Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods
NASA Technical Reports Server (NTRS)
Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon
2010-01-01
A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
NASA Astrophysics Data System (ADS)
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps with Δt = 5.625 min, as would result if a branching number of 2 were applied throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding the reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in hydrological simulations and outperforms the 1280 min approach.
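Below is a minimal, illustrative Python sketch of the disaggregation chain described in the abstract: one uniform-splitting step with branching number 3, further splits with branching number 2 down to 7.5 min, and linear interpolation onto a 5 min grid. The Dirichlet weight generator is a placeholder; the actual cascade model uses splitting probabilities and weights estimated from recording stations.

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_step(values, branching):
    """One multiplicative cascade step: each interval is split into
    `branching` sub-intervals whose weights sum to 1 (mass conservation)."""
    out = []
    for v in values:
        # Placeholder weight model: random Dirichlet weights; the actual
        # model uses probabilities/weights estimated from observed data.
        w = rng.dirichlet(np.ones(branching))
        out.extend(v * w)
    return np.asarray(out)

def disaggregate_day(daily_mm):
    """Daily total -> 5 min series: 1440/3 = 480 min intervals, halving
    (branching 2) down to 7.5 min, then linear interpolation to 5 min."""
    series = np.array([daily_mm])
    series = cascade_step(series, branching=3)        # 3 x 480 min
    for _ in range(6):                                # 480 -> 7.5 min
        series = cascade_step(series, branching=2)
    # Interpolate interval intensities (mm/min) from 7.5 min to 5 min steps.
    t_75 = np.arange(series.size) * 7.5 + 3.75        # interval mid-points
    t_5 = np.arange(288) * 5.0 + 2.5
    intensity = np.interp(t_5, t_75, series / 7.5)
    # Rescale so the 5 min totals still sum to the observed daily amount.
    values_5 = intensity * 5.0
    return values_5 * daily_mm / values_5.sum()

print(disaggregate_day(24.0).sum())   # ~24.0 mm, preserved by construction
```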
SMAP Soil Moisture Disaggregation using Land Surface Temperature and Vegetation Data
NASA Astrophysics Data System (ADS)
Fang, B.; Lakshmi, V.
2016-12-01
Soil moisture (SM) is a key parameter in agriculture, hydrology and ecology studies. Global SM retrievals have been provided by microwave remote sensing since the late 1970s, and many SM retrieval algorithms have been developed, calibrated and applied to satellite sensors such as AMSR-E (Advanced Microwave Scanning Radiometer for the Earth Observing System), AMSR-2 (Advanced Microwave Scanning Radiometer 2) and SMOS (Soil Moisture and Ocean Salinity). In particular, the SMAP (Soil Moisture Active/Passive) satellite, developed by NASA, was launched in January 2015. SMAP provides soil moisture products at 9 km and 36 km spatial resolutions, which are too coarse for finer-scale research and applications. To address this issue, this study applied an SM disaggregation algorithm to the SMAP passive microwave 36 km soil moisture product. The algorithm is based on the thermal inertia relationship between daily surface temperature variation and daily average soil moisture, modulated by vegetation condition, and uses remote sensing retrievals from AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer) and SPOT (Satellite Pour l'Observation de la Terre), as well as Land Surface Model (LSM) output from NLDAS (North American Land Data Assimilation System). The disaggregation model was built at 1/8° spatial resolution on a monthly basis and was implemented to disaggregate SMAP 36 km SM retrievals to 1 km resolution in Oklahoma. The SM disaggregation results were validated using MESONET (Mesoscale Network) and MICRONET (Microscale Network) ground SM measurements.
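A hedged sketch of the thermal inertia idea described above: fit, per vegetation (NDVI) class, a linear relation between daily average soil moisture and the daily surface temperature amplitude at coarse resolution, apply it to 1 km inputs, and anchor the result to the 36 km SMAP retrieval. The function names, NDVI binning, and bias-correction step are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fit_thermal_inertia_model(coarse_sm, coarse_dT, coarse_ndvi, n_bins=5):
    """Fit per-NDVI-class linear models SM ~ a + b * dT at coarse scale.
    coarse_sm  : daily average soil moisture (e.g., model or SMAP, m3/m3)
    coarse_dT  : daily surface temperature amplitude (Tmax - Tmin, K)
    coarse_ndvi: vegetation index used to stratify the relationship"""
    edges = np.quantile(coarse_ndvi, np.linspace(0, 1, n_bins + 1))
    models = []
    for i in range(n_bins):
        m = (coarse_ndvi >= edges[i]) & (coarse_ndvi <= edges[i + 1])
        b, a = np.polyfit(coarse_dT[m], coarse_sm[m], 1)   # slope, intercept
        models.append((a, b))
    return edges, models

def disaggregate(sm_36km, fine_dT, fine_ndvi, edges, models):
    """Apply the class-wise models at 1 km, then shift the result so the
    mean of the fine estimates equals the coarse SMAP retrieval."""
    n_bins = len(models)
    cls = np.clip(np.searchsorted(edges, fine_ndvi) - 1, 0, n_bins - 1)
    a = np.array([models[c][0] for c in cls.ravel()]).reshape(cls.shape)
    b = np.array([models[c][1] for c in cls.ravel()]).reshape(cls.shape)
    sm_1km = a + b * fine_dT
    return sm_1km + (sm_36km - sm_1km.mean())

# Hypothetical example data (one 36 km pixel, 36 x 36 grid of 1 km cells):
rng = np.random.default_rng(0)
c_sm, c_dT = rng.uniform(0.1, 0.4, 200), rng.uniform(5, 25, 200)
c_ndvi = rng.uniform(0.1, 0.8, 200)
edges, models = fit_thermal_inertia_model(c_sm, c_dT, c_ndvi)
fine = disaggregate(0.25, rng.uniform(5, 25, (36, 36)),
                    rng.uniform(0.1, 0.8, (36, 36)), edges, models)
print(fine.mean())   # equals the 36 km retrieval (0.25) by construction
```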
Disaggregation Of Passive Microwave Soil Moisture For Use In Watershed Hydrology Applications
NASA Astrophysics Data System (ADS)
Fang, Bin
In recent years, passive microwave remote sensing has provided soil moisture products using instruments on board satellite and airborne platforms. Spatial resolution is limited by the antenna diameter, to which resolution is inversely proportional. As a result, typical products have a spatial resolution of tens of kilometers, which is not compatible with some hydrological research applications. For this reason, this dissertation proposes and implements three disaggregation algorithms that estimate L-band passive microwave soil moisture at the subpixel level using high-spatial-resolution remote sensing products from other optical and radar instruments. The first technique uses thermal inertia theory to establish a relationship between daily temperature change and average soil moisture, modulated by the vegetation condition; it was developed using NLDAS, AVHRR, SPOT and MODIS data and applied to disaggregate the 25 km AMSR-E soil moisture to 1 km in Oklahoma. The second algorithm is built on semi-empirical physical models (NP89 and LP92), derived from numerical experiments, that relate soil evaporation efficiency to soil moisture over the surface skin sensing depth (a few millimeters); it uses simulated soil temperature derived from MODIS and NLDAS, together with AMSR-E soil moisture at 25 km, to disaggregate the coarse-resolution soil moisture to 1 km in Oklahoma. The third algorithm modeled the relationship between the change in co-polarized radar backscatter and the change in remotely sensed microwave soil moisture retrievals, assuming that the change in soil moisture was a function only of the canopy opacity. The change detection algorithm was implemented using aircraft-based remote sensing data from PALS and UAVSAR collected during SMAPVEX12 in southern Manitoba, Canada. The PALS L-band h-polarization radiometer soil moisture retrievals were disaggregated by combining them with the PALS and UAVSAR L-band hh-polarization radar data at spatial resolutions of 1500 m and 5 m/800 m, respectively. All three algorithms were validated using ground measurements from in situ station networks or handheld hydra probes. The validation results demonstrate the practicability of disaggregating coarse-resolution passive microwave soil moisture products.
Rainfall disaggregation for urban hydrology: Effects of spatial consistence
NASA Astrophysics Data System (ADS)
Müller, Hannes; Haberlandt, Uwe
2015-04-01
For urban hydrology, rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are very short in most cases, so they cannot be used. In contrast, time series with lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive time series with a long duration and a high resolution by disaggregating the time series of non-recording stations with information from the time series of recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed that a day consists of only 1280 minutes in total as the starting point for the disaggregation process. We introduce a new variant of the cascade model, which works without this assumption and also outperforms the existing approach regarding time series characteristics such as wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches the rainfall time series of different stations are disaggregated without consideration of the surrounding stations. This results in unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has previously been used successfully for hourly values. Relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics such as probability of occurrence, continuity ratio and coefficient of correlation. The investigation area is a sewage system in Northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and of the spatial dependence on overflow occurrences and volumes of the sewage system will be analyzed.
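The sketch below illustrates, under simplifying assumptions, how a simulated annealing resampler can reorder disaggregated relative diurnal cycles at one station so that its series better matches a bivariate spatial criterion (here simply the correlation with a neighbouring station). The actual method optimizes several bivariate characteristics and preserves the daily totals; data and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_resample(target, reference, n_iter=20000, t0=1.0, cooling=0.9995):
    """Reorder the daily relative profiles of `target` so that its hourly
    series correlates better with `reference` at a neighbouring station.
    target, reference: arrays of shape (n_days, 24) of relative diurnal cycles.
    Returns a permutation of the days of `target` and the final score."""
    order = np.arange(len(target))

    def objective(o):
        # Bivariate criterion: correlation between the two hourly series.
        return np.corrcoef(target[o].ravel(), reference.ravel())[0, 1]

    score, temp = objective(order), t0
    for _ in range(n_iter):
        i, j = rng.integers(0, len(order), size=2)
        cand = order.copy()
        cand[i], cand[j] = cand[j], cand[i]          # swap two days' profiles
        cand_score = objective(cand)
        # Accept improvements always, deteriorations with Boltzmann probability.
        if cand_score > score or rng.random() < np.exp((cand_score - score) / temp):
            order, score = cand, cand_score
        temp *= cooling
    return order, score

# Hypothetical disaggregated profiles at two stations (365 days x 24 hours):
target = rng.dirichlet(np.ones(24), size=365)
reference = rng.dirichlet(np.ones(24), size=365)
order, score = anneal_resample(target, reference, n_iter=2000)
print(round(score, 3))
```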
Non-Intrusive Load Monitoring Approaches for Disaggregated Energy Sensing: A Survey
Zoha, Ahmed; Gluhak, Alexander; Imran, Muhammad Ali; Rajasegarar, Sutharshan
2012-01-01
Appliance Load Monitoring (ALM) is essential for energy management solutions, allowing them to obtain appliance-specific energy consumption statistics that can further be used to devise load scheduling strategies for optimal energy utilization. Fine-grained energy monitoring can be achieved by deploying smart power outlets on every device of interest; however, this incurs extra hardware cost and installation complexity. Non-Intrusive Load Monitoring (NILM) is an attractive method for energy disaggregation, as it can discern devices from the aggregated data acquired from a single point of measurement. This paper provides a comprehensive overview of NILM systems and the associated methods and techniques used for disaggregated energy sensing. We review the state-of-the-art load signatures and disaggregation algorithms used for appliance recognition and highlight challenges and future research directions. PMID:23223081
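As a concrete illustration of the single-measurement-point idea, the following minimal event-based NILM sketch (in the spirit of classic edge-detection approaches, not any specific algorithm from the survey) detects step changes in aggregate power and matches them to a hypothetical table of appliance signatures.

```python
import numpy as np

# Hypothetical appliance signature table: steady-state power deltas in watts.
SIGNATURES = {"fridge": 120.0, "kettle": 2000.0, "washer_heater": 1800.0}

def detect_events(aggregate_power, threshold=50.0):
    """Return (index, delta_watts) for every step change in the aggregate
    signal larger than `threshold` (a simple edge detector)."""
    deltas = np.diff(aggregate_power)
    idx = np.where(np.abs(deltas) > threshold)[0]
    return [(int(i) + 1, float(deltas[i])) for i in idx]

def label_events(events):
    """Match each detected edge to the closest appliance signature.
    Positive deltas are switch-on events, negative deltas switch-off."""
    labelled = []
    for i, delta in events:
        name = min(SIGNATURES, key=lambda k: abs(abs(delta) - SIGNATURES[k]))
        labelled.append((i, name, "on" if delta > 0 else "off"))
    return labelled

# Toy aggregate trace: fridge switches on, kettle on, kettle off.
trace = np.concatenate([np.full(10, 80.0), np.full(10, 200.0),
                        np.full(5, 2200.0), np.full(10, 200.0)])
print(label_events(detect_events(trace)))
```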
Load Disaggregation Technologies: Real World and Laboratory Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.
Low-cost interval metering and communication technology improvements over the past ten years have enabled the maturity of load disaggregation (or non-intrusive load monitoring) technologies to better estimate and report the energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility- and customer-facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate the energy consumption of end-uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large-scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real-world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers and developers to identify beneficial ways to improve their technologies.
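For illustration, two metrics commonly used to quantify disaggregation performance are sketched below (an energy-based normalized error and an event-detection F1 score); these are examples of candidate metrics, not necessarily the ones the paper recommends, and the data are hypothetical.

```python
import numpy as np

def normalized_error(estimated, actual):
    """Energy-based metric: |estimated - actual| summed over time,
    normalized by the total actual energy of the appliance."""
    estimated, actual = np.asarray(estimated), np.asarray(actual)
    return np.abs(estimated - actual).sum() / actual.sum()

def event_f1(estimated_on, actual_on):
    """Event/state metric: F1 score of the predicted on/off states."""
    estimated_on = np.asarray(estimated_on, bool)
    actual_on = np.asarray(actual_on, bool)
    tp = np.sum(estimated_on & actual_on)
    fp = np.sum(estimated_on & ~actual_on)
    fn = np.sum(~estimated_on & actual_on)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical ground truth vs. disaggregated estimate for one appliance (W):
actual = np.array([0, 0, 1500, 1500, 1500, 0, 0, 1500, 0, 0], float)
estimated = np.array([0, 0, 1400, 1600, 0, 0, 0, 1500, 100, 0], float)
print(normalized_error(estimated, actual), event_f1(estimated > 100, actual > 100))
```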
NASA Astrophysics Data System (ADS)
Serrat-Capdevila, A.; Valdes, J. B.
2005-12-01
An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the resulting operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support system tool for better management of the water resources in the Lower Rio Grande Basin, which is currently enduring a severe drought.
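A small sketch of the disaggregation stage only, with hypothetical data: given the aggregate release prescribed by the SDP policy, a local nonlinear program allocates it among individual reservoirs subject to mass balance and bounds. The quadratic storage-target objective is an illustrative stand-in for the actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def disaggregate_release(total_release, storages, inflows, capacities, targets):
    """Split the aggregate SDP release among individual reservoirs.
    Objective (illustrative): keep each reservoir close to a target storage
    fraction while respecting mass balance and capacity bounds."""
    n = len(storages)

    def objective(r):
        end_storage = storages + inflows - r
        return np.sum((end_storage / capacities - targets) ** 2)

    cons = [{"type": "eq", "fun": lambda r: r.sum() - total_release}]
    bounds = [(0.0, s + q) for s, q in zip(storages, inflows)]  # non-negative storage
    res = minimize(objective, x0=np.full(n, total_release / n),
                   bounds=bounds, constraints=cons)
    return res.x

# Hypothetical two-reservoir system (units: hm3):
releases = disaggregate_release(total_release=300.0,
                                storages=np.array([800.0, 1500.0]),
                                inflows=np.array([100.0, 250.0]),
                                capacities=np.array([2000.0, 4000.0]),
                                targets=np.array([0.5, 0.5]))
print(releases, releases.sum())
```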
Active–passive soil moisture retrievals during the SMAP validation experiment 2012
USDA-ARS?s Scientific Manuscript database
The goal of this study is to assess the performance of the active–passive algorithm for the NASA Soil Moisture Active Passive mission (SMAP) using airborne and ground observations from a field campaign. The SMAP active–passive algorithm disaggregates the coarse-resolution radiometer brightness tempe...
Converged photonic data storage and switch platform for exascale disaggregated data centers
NASA Astrophysics Data System (ADS)
Pitwon, R.; Wang, K.; Worrall, A.
2017-02-01
We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.
Multisite rainfall downscaling and disaggregation in a tropical urban area
NASA Astrophysics Data System (ADS)
Lu, Y.; Qin, X. S.
2014-02-01
A systematic downscaling-disaggregation study was conducted over Singapore Island, with the aim of generating rainfall data of high spatial and temporal resolution under future climate-change conditions. The study consisted of two major components. The first part was an inter-comparison of various downscaling and disaggregation methods based on observed data. This included (i) single-site generalized linear model (GLM) plus K-nearest neighbor (KNN) (S-G-K) vs. multisite GLM (M-G) for spatial downscaling, (ii) HYETOS vs. KNN for single-site disaggregation, and (iii) KNN vs. MuDRain (Multivariate Rainfall Disaggregation tool) for multisite disaggregation. The results revealed that, for multisite downscaling, M-G performs better than S-G-K in covering the observed data with a lower RMSE value; for single-site disaggregation, KNN preserves the basic statistics (i.e. standard deviation, lag-1 autocorrelation and probability of a wet hour) better than HYETOS; and for multisite disaggregation, MuDRain outperforms KNN in fitting interstation correlations. In the second part of the study, an integrated downscaling-disaggregation framework based on M-G, KNN, and MuDRain was used to generate hourly rainfall at multiple sites. The results indicated that the downscaled and disaggregated rainfall data, based on multiple ensembles from HadCM3 for the period from 1980 to 2010, covered the observed mean rainfall amounts and extremes well and also reasonably preserved the spatial correlations at both daily and hourly timescales. The framework was also used to project future rainfall conditions under the HadCM3 SRES A2 and B2 scenarios. The projections indicated that the annual rainfall amount could decrease by up to 5% at the end of this century, but that wet-season rainfall and extreme hourly rainfall could notably increase.
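The following is a hedged sketch of the K-nearest-neighbour disaggregation idea evaluated in the study: for a daily total, borrow the relative hourly profile of one of the k most similar observed days. The similarity measure (daily amount only) and the data are simplified assumptions; a real application would also condition on season or other predictors.

```python
import numpy as np

rng = np.random.default_rng(7)

def knn_disaggregate_day(daily_total, obs_daily, obs_hourly, k=5):
    """Disaggregate one simulated daily rainfall total into 24 hourly values
    by borrowing the relative hourly pattern of a similar observed day.
    obs_daily : (n_days,) observed daily totals
    obs_hourly: (n_days, 24) corresponding observed hourly values"""
    # k nearest observed days by daily amount.
    nearest = np.argsort(np.abs(obs_daily - daily_total))[:k]
    donor = rng.choice(nearest)
    profile = obs_hourly[donor] / obs_daily[donor]    # relative diurnal cycle
    return daily_total * profile

# Hypothetical observed record: 1000 wet days with hourly structure.
obs_hourly = rng.gamma(0.3, 2.0, size=(1000, 24))
obs_daily = obs_hourly.sum(axis=1)
hourly = knn_disaggregate_day(35.0, obs_daily, obs_hourly)
print(hourly.sum())   # 35.0 mm, preserved by construction
```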
Multivariate exploration of non-intrusive load monitoring via spatiotemporal pattern network
Liu, Chao; Akintayo, Adedotun; Jiang, Zhanhong; ...
2017-12-18
Non-intrusive load monitoring (NILM) of electrical demand for the purpose of identifying load components has thus far mostly been studied using univariate data, e.g., using only whole-building electricity consumption time series to identify a certain type of end-use such as lighting load. However, using additional variables in the form of multivariate time series data may provide more information in terms of extracting distinguishable features in the context of energy disaggregation. In this work, a novel probabilistic graphical modeling approach, namely the spatiotemporal pattern network (STPN), is proposed for energy disaggregation using multivariate time-series data. The STPN framework is shown to be capable of handling diverse types of multivariate time series to improve energy disaggregation performance. The technique outperforms the state-of-the-art factorial hidden Markov model (FHMM) and combinatorial optimization (CO) techniques in multiple real-life test cases. Furthermore, based on two homes' aggregate electric consumption data, a similarity metric is defined for the energy disaggregation of one home using a trained model based on the other home (i.e., the out-of-sample case). The proposed similarity metric allows us to enhance scalability via learning supervised models for a few homes and deploying such models to many other similar but unmodeled homes with significantly high disaggregation accuracy.
Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data
Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-01-01
Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using the disaggregated mean coarse reflectance, tested and validated in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between synthetic images and the actual observations. We further show that MSTDFA can be applied to the 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data, generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation with a high R of 0.97. PMID:26393607
NASA Astrophysics Data System (ADS)
Lakshmi, V.; Mladenova, I. E.; Narayan, U.
2009-12-01
Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration, and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real time. However, at the present time the Advanced Microwave Scanning Radiometer (AMSR-E) on board NASA's Aqua platform is the only satellite sensor that supplies a soil moisture product. AMSR-E's coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability for small-scale studies. A very promising technique for spatial disaggregation by combining radar and radiometer observations has been demonstrated by the authors, using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to a change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at the lower resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003 and the Murrumbidgee Catchment in southeastern Australia in 2006. All of these locations have different soil and land cover conditions, which provides a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks.
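A simplified sketch of the change-detection disaggregation described above, with hypothetical inputs: the coarse radar sensitivity to soil moisture is estimated from the radiometer-scale change, redistributed with a fine-scale field standing in for the vegetation and soil-texture information, and used to convert fine-scale backscatter change into soil moisture change.

```python
import numpy as np

def disaggregate_sm_change(sm_coarse_t0, sm_coarse_t1,
                           sigma0_fine_t0, sigma0_fine_t1, sensitivity_scale):
    """Change-detection disaggregation (illustrative sketch).
    sm_coarse_t*     : radiometer soil moisture at the coarse footprint (m3/m3)
    sigma0_fine_t*   : co-polarized radar backscatter at fine resolution (dB)
    sensitivity_scale: fine-resolution field (e.g., from vegetation water
                       content / soil texture) that redistributes the coarse
                       radar sensitivity; mean of about 1 over the footprint."""
    d_sigma_fine = sigma0_fine_t1 - sigma0_fine_t0
    d_sm_coarse = sm_coarse_t1 - sm_coarse_t0
    # Coarse-scale radar sensitivity to soil moisture (dB per m3/m3),
    # from the footprint-mean backscatter change between the two overpasses.
    sens_coarse = d_sigma_fine.mean() / d_sm_coarse
    # Disaggregate the sensitivity, then convert fine backscatter change
    # into a fine-scale soil moisture change anchored to the coarse change.
    sens_fine = sens_coarse * sensitivity_scale
    return d_sigma_fine / sens_fine

# Hypothetical footprint of 25 x 25 fine pixels:
rng = np.random.default_rng(3)
s0 = rng.normal(-12.0, 1.0, (25, 25))
s1 = s0 + rng.normal(1.5, 0.3, (25, 25))      # wetting event raises backscatter
scale = rng.uniform(0.7, 1.3, (25, 25))
d_sm_fine = disaggregate_sm_change(0.15, 0.25, s0, s1, scale)
print(d_sm_fine.mean())    # approximately the coarse change of 0.10 m3/m3
```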
NASA Astrophysics Data System (ADS)
Nallasamy, N. D.; Muraleedharan, B. V.; Kathirvel, K.; Narasimhan, B.
2014-12-01
Sustainable management of water resources requires reliable estimates of actual evapotranspiration (ET) at fine spatial and temporal resolution. This is significant in the case of rice-based irrigation systems, which are among the major consumers of surface water resources and in which ET forms a major component of water consumption. However, a large tradeoff between the spatial and temporal resolution of satellite images, coupled with the lack of an adequate number of cloud-free images within a growing season, acts as a major constraint in deriving ET at fine spatial and temporal resolution using remote sensing based energy balance models. The scale at which ET is determined is decided by the spatial and temporal scale of Land Surface Temperature (LST) and the Normalized Difference Vegetation Index (NDVI), which form inputs to energy balance models. In this context, the current study employed disaggregation algorithms (NL-DisTrad and DisNDVI) to generate time series of LST and NDVI images at fine resolution. The disaggregation algorithms generate LST and NDVI at finer scale by integrating temporal information from concurrent coarse resolution data and spatial information from a single fine resolution image. The temporal frequency of the disaggregated images is further improved by employing composite images of NDVI and LST in the spatio-temporal disaggregation method. The study further employed half-hourly incoming surface insolation and outgoing longwave radiation obtained from the Indian geostationary satellite (Kalpana-1) to convert the instantaneous ET into daily ET and subsequently into seasonal ET, thereby improving the accuracy of the ET estimates. The estimates of ET were validated with field-based water balance measurements carried out in Gadana, a subbasin dominated by rice paddy fields, located in Tamil Nadu, India.
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
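For context, the sketch below implements a basic iterative aggregation/disaggregation cycle of the kind the multi-level method is compared against (not the multi-level algorithm itself); the smoothing step is a simple power iteration rather than the Gauss-Seidel sweep used in KMS-type methods, and the example chain is hypothetical.

```python
import numpy as np

def stationary_ad(P, blocks, n_iter=50):
    """Iterative aggregation/disaggregation sketch for pi = pi P.
    P      : row-stochastic transition matrix (n x n)
    blocks : list of index arrays partitioning the states"""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # --- Aggregation: build the coarse (block) transition matrix.
        m = len(blocks)
        A = np.zeros((m, m))
        for I, bi in enumerate(blocks):
            w = pi[bi] / pi[bi].sum()                 # within-block weights
            for J, bj in enumerate(blocks):
                A[I, J] = w @ P[np.ix_(bi, bj)].sum(axis=1)
        # Solve the coarse stationary equations xi A = xi, sum(xi) = 1.
        M = np.vstack([A.T - np.eye(m), np.ones(m)])
        xi = np.linalg.lstsq(M, np.r_[np.zeros(m), 1.0], rcond=None)[0]
        # --- Disaggregation: rescale block masses, keep within-block shape.
        for I, bi in enumerate(blocks):
            pi[bi] *= xi[I] / pi[bi].sum()
        # --- Smoothing: one power-iteration step and renormalization.
        pi = pi @ P
        pi /= pi.sum()
    return pi

# Hypothetical 4-state nearly completely decomposable chain (two blocks):
P = np.array([[0.69, 0.30, 0.005, 0.005],
              [0.30, 0.69, 0.005, 0.005],
              [0.005, 0.005, 0.69, 0.30],
              [0.005, 0.005, 0.30, 0.69]])
print(stationary_ad(P, [np.array([0, 1]), np.array([2, 3])]))
```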
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general-purpose solver for the solution of steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
NASA Astrophysics Data System (ADS)
Ansari Amoli, Abdolreza; Lopez-Baeza, Ernesto; Mahmoudi, Ali; Mahmoodi, Ali
2016-07-01
Synergistic Use of SMOS Measurements with SMAP Derived and In-situ Data over the Valencia Anchor Station by Using a Downscaling Technique. Soil moisture products from active sensors are not operationally available. Passive remote sensors return more accurate estimates, but their resolution is much coarser. One solution to overcome this problem is the synergy between radar and radiometric data by using disaggregation (downscaling) techniques. Few studies have been conducted to merge high resolution radar and coarse resolution radiometer measurements in order to obtain an intermediate resolution product. In this paper we present an algorithm using combined available SMAP (Soil Moisture Active and Passive) radar and SMOS (Soil Moisture and Ocean Salinity) radiometer measurements to estimate surface soil moisture over the Valencia Anchor Station (VAS), Valencia, Spain. The goal is to combine the respective attributes of the radar and radiometer observations to estimate soil moisture at a resolution of 3 km. The algorithm disaggregates the coarse resolution SMOS (15 km) radiometer brightness temperature product based on the spatial variation of the high resolution SMAP (3 km) radar backscatter. The disaggregation of the radiometer brightness temperature uses the radar backscatter spatial patterns within the radiometer footprint that are inferred from the radar measurements. For this reason the radar measurements within the radiometer footprint are scaled by parameters that are derived from the temporal fluctuations in the radar and radiometer measurements.
McKenzie, Briar; Santos, Joseph Alvin; Trieu, Kathy; Thout, Sudhir Raj; Johnson, Claire; Arcand, JoAnne; Webster, Jacqui; McLean, Rachael
2018-05-01
The aim of the current review was to examine the scope of studies published in the Science of Salt Weekly that contained a measure of self-reported knowledge, attitudes, and behavior (KAB) concerning salt. Specific objectives were to examine how KAB measures are used to evaluate salt reduction intervention studies, which questionnaires are used, and whether any gender differences exist in self-reported KAB. Studies were reviewed from the commencement of Science of Salt Weekly in June 2013 to the end of August 2017. Seventy-five studies had relevant measures of KAB and were included in this review; 13 of these were salt-reduction intervention-evaluation studies, with the remainder (62) being descriptive KAB studies. The KAB questionnaires used were specific to the populations studied, without evidence of a best-practice measure. Forty percent of studies used KAB alone as the primary outcome measure; the remaining studies used more quantitative measures of salt intake such as 24-hour urine. Only half of the descriptive studies reported KAB outcomes disaggregated by gender, and of those, 73% showed that women had more favorable KAB related to salt. None of the salt intervention-evaluation studies reported disaggregated KAB data. Therefore, it is likely important that evaluation studies disaggregate, and are appropriately powered to disaggregate, all outcomes by gender to address potential disparities. ©2018 Wiley Periodicals, Inc.
On the synergy of SMAP, AMSR2 AND SENTINEL-1 for retrieving soil moisture
NASA Astrophysics Data System (ADS)
Santi, E.; Paloscia, S.; Pettinato, S.; Brocca, L.; Ciabatta, L.; Entekhabi, D.
2018-03-01
An algorithm for retrieving soil moisture content (SMC) from the synergistic use of active and passive microwave acquisitions is presented. The algorithm takes advantage of the integration of microwave data from SMAP, Sentinel-1 and AMSR2 to overcome the SMAP radar failure and to obtain an SMC product at enhanced resolution (0.1° × 0.1°) and improved accuracy with respect to the original SMAP radiometric SMC product. A disaggregation technique based on the Smoothing Filter-based Intensity Modulation (SFIM) allows combining the radiometric and SAR data. The disaggregated microwave data are used as inputs to an Artificial Neural Network (ANN) based algorithm, which is able to exploit the synergy between active and passive acquisitions. The algorithm is defined, trained and tested using the SMEX02 experimental dataset and data simulated by forward electromagnetic models based on radiative transfer theory. The algorithm is then adapted to satellite data and tested using one year of SMAP, AMSR2 and Sentinel-1 co-located data over a flat agricultural area located in the Po Valley, in northern Italy. Spatially distributed SMC values at 0.1° × 0.1° resolution generated by the Soil Water Balance Model (SWBM) are considered as the reference for this purpose. The synergy of SMAP, Sentinel-1 and AMSR2 increased the correlation between estimated and reference SMC from R ≅ 0.68 for the SMAP-based retrieval up to R ≅ 0.86 for the SMAP + Sentinel-1 + AMSR2 combination. The corresponding Root Mean Square Error (RMSE) decreased from RMSE ≅ 0.04 m3/m3 to RMSE ≅ 0.024 m3/m3.
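A minimal sketch of an SFIM-style disaggregation step, assuming co-registered grids and an integer resolution ratio: the upsampled coarse field is modulated by the ratio of the fine image to its local mean, so spatial detail comes from the SAR data and the radiometric level from the coarse sensor. All data below are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim_disaggregate(coarse, fine, ratio):
    """Smoothing Filter-based Intensity Modulation (illustrative sketch).
    coarse: low-resolution image (e.g., radiometer TB or SMC proxy)
    fine  : co-registered high-resolution image (e.g., SAR backscatter, linear units)
    ratio : integer resolution ratio between the two grids"""
    # Upsample the coarse image to the fine grid (bilinear replication).
    coarse_up = zoom(coarse, ratio, order=1)
    # Low-pass the fine image with a window matching the coarse pixel size.
    fine_smoothed = uniform_filter(fine, size=ratio)
    # Modulate: high-frequency spatial detail comes from the fine sensor,
    # the low-frequency radiometric level from the coarse sensor.
    return coarse_up * fine / np.maximum(fine_smoothed, 1e-6)

# Hypothetical 10 x 10 coarse grid disaggregated by a factor of 4 with SAR data:
rng = np.random.default_rng(5)
coarse = rng.uniform(0.1, 0.4, (10, 10))
fine = rng.lognormal(mean=0.0, sigma=0.3, size=(40, 40))
print(sfim_disaggregate(coarse, fine, ratio=4).shape)   # (40, 40)
```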
Stevens, Forrest R; Gaughan, Andrea E; Linard, Catherine; Tatem, Andrew J
2015-01-01
High resolution, contemporary data on human population distributions are vital for measuring the impacts of population growth, monitoring human-environment interactions, and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer-scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, "Random Forest" estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and its gains in accuracy and flexibility over those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America.
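A hedged sketch of the dasymetric workflow with synthetic data and scikit-learn: train a random forest on census-unit log population density, predict a fine-grid weighting surface, and redistribute each unit's count proportionally to the predicted weights so unit totals are preserved. Covariates, sizes, and model settings are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)

# Hypothetical data: 200 census units with a known count, and a fine grid of
# 10,000 cells, each assigned to one census unit.
n_units, n_cells = 200, 10_000
unit_of_cell = rng.integers(0, n_units, n_cells)
cell_covariates = rng.normal(size=(n_cells, 4))    # e.g. lights, land cover, roads, elevation
unit_covariates = np.array([cell_covariates[unit_of_cell == u].mean(axis=0)
                            for u in range(n_units)])
unit_counts = rng.integers(500, 50_000, n_units).astype(float)
unit_area_cells = np.bincount(unit_of_cell, minlength=n_units)

# 1. Train on log population density at the census-unit level.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(unit_covariates, np.log(unit_counts / unit_area_cells))

# 2. Predict a relative density (weighting) surface on the fine grid.
weights = np.exp(rf.predict(cell_covariates))

# 3. Dasymetric redistribution: each unit's count is split proportionally to
#    the predicted weights of the cells inside that unit, so unit totals are
#    preserved exactly.
weight_sum_per_unit = np.bincount(unit_of_cell, weights=weights, minlength=n_units)
cell_population = unit_counts[unit_of_cell] * weights / weight_sum_per_unit[unit_of_cell]

print(np.allclose(np.bincount(unit_of_cell, weights=cell_population, minlength=n_units),
                  unit_counts))   # True: counts preserved within each unit
```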
ERIC Educational Resources Information Center
Rooney, Joanne
2009-01-01
Principals talk about children in their schools, noting that their essence cannot be condensed into efficiently scored, disaggregated data. Data tools are useful. But even a dramatic increase in test scores headlined in the newspaper must not become the end product of principals' educational endeavors. What's going on in the hearts and minds of…
Residential energy use in Mexico: Structure, evolution, environmental impacts, and savings potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masera, O.; Friedmann, R.; deBuen, O.
This article examines the characteristics of residential energy use in Mexico, its environmental impacts, and the savings potential of the major end-uses. The main options and barriers to increase the efficiency of energy use are discussed. The energy analysis is based on a disaggregation of residential energy use by end-uses. The dynamics of the evolution of the residential energy sector during the past 20 years are also addressed when the information is available. Major areas for research and for innovative decision-making are identified and prioritized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiangqi; Wang, Jiyu; Mulcahy, David
This paper presents a voltage-load sensitivity matrix (VLSM) based voltage control method to deploy demand response resources for controlling voltage in high-solar-penetration distribution feeders. The IEEE 123-bus system in OpenDSS is used for testing the performance of the preliminary VLSM-based voltage control approach. A load disaggregation process is applied to disaggregate the total load profile at the feeder head to each load node along the feeder, so that loads are modeled at the residential house level. Measured solar generation profiles are used in the simulation to model the impact of solar power on distribution feeder voltage profiles. Different case studies involving various PV penetration levels and installation locations have been performed. Simulation results show that the VLSM algorithm meets the voltage control requirements and is an effective voltage control strategy.
Selecting Cases for Intensive Analysis: A Diversity of Goals and Methods
ERIC Educational Resources Information Center
Gerring, John; Cojocaru, Lee
2016-01-01
This study revisits the task of case selection in case study research, proposing a new typology of strategies that is explicit, disaggregated, and relatively comprehensive. A secondary goal is to explore the prospects for case selection by "algorithm," aka "ex ante," "automatic," "quantitative,"…
Commercial Buildings Energy Consumption Survey 2012 - Detailed Tables
2016-01-01
The 2012 CBECS consumption and expenditures detailed tables are comprised of Tables C1-C38, which cover overall electricity, natural gas, fuel oil and district heat consumption, and tables E1-E11, which disaggregate the same energy sources by end use (heating, cooling, lighting, etc.). All of the detailed tables contain extensive row categories of building characteristics.
End-of-Grade (EOG) Multiple-Choice Test Results, 2008-09. Measuring Up. E&R Report No. 10.12
ERIC Educational Resources Information Center
McMillen, Brad
2010-01-01
In 2008-09, results from End-of-Grade (EOG) reading and mathematics tests in WCPSS continued to demonstrate an upward trend across grade levels and student subgroups. Disaggregation of results by ethnicity, income level, disability status, and English proficiency status showed that achievement gaps between historically underperforming subgroups…
Spatial weighting approach in numerical method for disaggregation of MDGs indicators
NASA Astrophysics Data System (ADS)
Permai, S. D.; Mukhaiyar, U.; Satyaning PP, N. L. P.; Soleh, M.; Aini, Q.
2018-03-01
Disaggregation is used to separate and classify data based on certain characteristics or by administrative level. Disaggregated data are very important because some indicators are not measured for all characteristics. Detailed disaggregation of development indicators is important to ensure that everyone benefits from development and to support better development-related policymaking. This paper aims to explore different methods to disaggregate the national employment-to-population ratio indicator to the province and city level. A numerical approach is applied to overcome the unavailability of disaggregated data by constructing several spatial weight matrices based on neighbourhood, Euclidean distance and correlation. These methods can potentially be used and further developed to disaggregate development indicators to a lower spatial level, even by several demographic characteristics.
Disaggregated Imaging Spacecraft Constellation Optimization with a Genetic Algorithm
2014-03-27
"…distinct modules which, once 'assembled' on orbit, deliver the capability of the original monolithic system [5]."
Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses
NASA Astrophysics Data System (ADS)
Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation
2011-12-01
Water-energy linkages have received growing attention from water and energy utilities as they recognize that collaborative efforts can implement more effective conservation and efficiency improvement programs at lower cost and with less effort. To date, limited household energy-water data has allowed only deterministic analysis for average, representative households and has required coarse assumptions, such as treating the water heater (the primary energy use in a home apart from heating and cooling) as a single end use. Here, we use recently available disaggregated hot and cold water household end-use data to estimate water and energy consumption for toilet, shower, faucet, dishwasher, laundry machine, leaks, and other household uses, and the savings from appliance retrofits. The disaggregated hot water and bulk water end-use data were previously collected by the USEPA for 96 single-family households in Seattle, WA, Oakland, CA, and Tampa, FL, between 2000 and 2003, for two weeks before and four weeks after each household was retrofitted with water-efficient appliances. Using the disaggregated data, we developed a stochastic model that represents the factors that influence water use for each appliance: behavioral (use frequency and duration), demographic (household size), and technological (use volume or flow rate). We also include stochastic factors that govern the energy to heat hot water: the hot water fraction (percentage of hot water volume to total water volume used in a certain end-use event), heater water intake and dispense temperatures, and the energy source for the heater (gas, electric, etc.). From the empirical household end-use data, we derive stochastic probability distributions for each water and energy factor, where each distribution represents the range and likelihood of values that the factor may take. The uncertainty of the stochastic water and energy factors is propagated using Monte Carlo simulations to calculate the composite probability distributions for water and energy use, potential savings, and payback periods to install efficient water end-use appliances and fixtures. Stochastic model results show the distributions among households for (i) water end-use, (ii) energy consumed to use water, and (iii) financial payback periods. Compared to deterministic analysis, stochastic modeling results show that hot water fractions for appliances follow normal distributions with high standard deviations and reveal pronounced variations among households that significantly affect energy savings and payback period estimates. These distributions provide an important tool to select and size water conservation programs to simultaneously meet both water and energy conservation goals. They also provide a way to identify and target the small fraction of customers with the potential to save large water volumes and energy from appliance retrofits. Future work will embed this household-scale stochastic model in city-scale models to identify win-win water management opportunities where households save money by conserving water and energy while cities avoid costs, downsize, or delay infrastructure development.
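The sketch below illustrates the Monte Carlo propagation for a single end use (showers); all distributions are illustrative assumptions, not parameters fitted to the EPA end-use data. Behavioral, demographic, and technological factors are sampled for each draw, and the energy to heat the hot-water fraction is computed from the sampled volume and temperature rise.

```python
import numpy as np

rng = np.random.default_rng(2024)
KWH_PER_M3_PER_K = 4.186e6 / 3.6e6   # energy to heat 1 m3 of water by 1 K (~1.163 kWh)

def simulate_household(n_draws=10_000):
    """Monte Carlo draw of daily shower water use and water-heating energy
    for one household; every distribution below is an assumed placeholder."""
    occupants = rng.integers(1, 6, n_draws)                       # demographic factor
    showers_per_person = rng.poisson(0.8, n_draws)                # behavioral: frequency
    duration_min = rng.lognormal(np.log(7), 0.4, n_draws)         # behavioral: duration
    flow_lpm = rng.normal(7.5, 1.5, n_draws).clip(3, 15)          # technological: flow rate
    hot_fraction = rng.normal(0.65, 0.10, n_draws).clip(0, 1)     # stochastic hot-water share
    delta_T = rng.normal(40.0, 5.0, n_draws)                      # heater rise (inlet -> setpoint), K
    heater_eff = rng.uniform(0.80, 0.95, n_draws)

    water_m3 = occupants * showers_per_person * duration_min * flow_lpm / 1000.0
    energy_kwh = water_m3 * hot_fraction * delta_T * KWH_PER_M3_PER_K / heater_eff
    return water_m3, energy_kwh

water, energy = simulate_household()
for name, x in [("water (m3/day)", water), ("energy (kWh/day)", energy)]:
    print(name, np.percentile(x, [10, 50, 90]).round(2))
```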
India Energy Outlook: End Use Demand in India to 2020
DOE Office of Scientific and Technical Information (OSTI.GOV)
de la Rue du Can, Stephane; McNeil, Michael; Sathaye, Jayant
Integrated economic models have been used to project both baseline and mitigation greenhouse gas emission scenarios at the country and global level. Results of these scenarios are typically presented at the sectoral level, such as industry, transport, and buildings, without further disaggregation. Recently, a keen interest has emerged in constructing bottom-up scenarios where technical energy saving potentials can be displayed in detail (IEA, 2006b; IPCC, 2007; McKinsey, 2007). Analysts interested in particular technologies and policies require detailed information to understand specific mitigation options in relation to business-as-usual trends. However, the limited information available for developing countries often poses a problem. In this report, we focus on analyzing energy use in India in greater detail. Results shown for the residential and transport sectors are taken from a previous report (de la Rue du Can, 2008). A complete picture of energy use at disaggregated levels is drawn to understand how energy is used in India and to put the different sources of end use energy consumption in perspective. For each sector, drivers of energy and technology are identified. Trends are then analyzed and used to project future growth. The results of this report provide valuable inputs to the elaboration of realistic energy efficiency scenarios.
NASA Astrophysics Data System (ADS)
Vincent, Sébastien; Lemercier, Blandine; Berthier, Lionel; Walter, Christian
2015-04-01
Accurate soil information over large extents is essential for managing agronomic and environmental issues. Where it exists, information on soil is often sparse or available at a coarser resolution than required. Typically, the spatial distribution of soil at regional scale is represented as a set of polygons defining soil map units (SMU), each one describing several soil types that are not spatially delineated, and a semantic database describing these objects. Delineation of soil types within SMU, i.e. spatial disaggregation of SMU, improves the accuracy of soil information using legacy data. The aim of this study was to predict soil types by spatial disaggregation of SMU through a decision tree approach, considering expert knowledge on soil-landscape relationships embedded in soil databases. The DSMART (Disaggregation and Harmonization of Soil Map Units Through resampled Classification Trees) algorithm developed by Odgers et al. (2014) was used. It requires soil information, environmental covariates, and calibration samples to build and then extrapolate decision trees. To assign a soil type to a particular spatial position, a weighted random allocation approach is applied: each soil type in the SMU is weighted according to its assumed proportion of occurrence in the SMU. Thus soil-landscape relationships are not considered in the current version of DSMART. Expert rules on soil distribution considering the relief, parent material and wetland location were proposed to drive the procedure of allocating soil types to sampled positions, in order to integrate the soil-landscape relationships. Semantic information about the spatial organization of soil types within SMU and exhaustive landscape descriptors were used. In the eastern part of Brittany (NW France), 171 soil types were described; their relative areas in the SMU were estimated, and the geomorphological and geological contexts were recorded. The model predicted 144 soil types. An external validation was performed by comparing predicted soil types with those observed on available soil maps at scales of 1:25,000 or 1:50,000. Overall accuracies were 63.1% and 36.2%, respectively, with and without considering adjacent pixels. The introduction of expert rules based on soil-landscape relationships to allocate soil types to calibration samples dramatically improved the results in comparison with a simple weighted random allocation procedure. It also enabled the production of a comprehensive soil map, retrieving the expected spatial organization of soils. Estimation of soil properties for various depths is planned using the disaggregated soil types, according to the GlobalSoilmap.net specifications. Odgers, N.P., Sun, W., McBratney, A.B., Minasny, B., Clifford, D., 2014. Disaggregating and harmonising soil map units through resampled classification trees. Geoderma 214, 91-100.
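A minimal sketch of the modified allocation step, with a hypothetical SMU composition and invented expert rules: soil types are drawn with probabilities proportional to their stated proportions, masked or down-weighted by simple soil-landscape rules before the weighted random draw. The tree-building part of DSMART is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical SMU composition: soil types and their assumed proportions.
SMU_COMPOSITION = {"Luvisol": 0.5, "Cambisol": 0.3, "Gleysol": 0.2}

def allocate_soil_type(site, composition=SMU_COMPOSITION):
    """Allocate a soil type to one calibration sample inside an SMU.
    `site` carries landscape descriptors used by (assumed) expert rules
    that restrict which soil types are plausible at that position."""
    names = list(composition)
    weights = np.array([composition[n] for n in names], float)
    # Illustrative expert rules on soil-landscape relationships:
    if not site["in_wetland"]:
        weights[names.index("Gleysol")] = 0.0        # hydromorphic soils only in wetlands
    if site["slope_pct"] > 15:
        weights[names.index("Luvisol")] *= 0.2       # deep leached soils unlikely on steep slopes
    weights /= weights.sum()
    return rng.choice(names, p=weights)              # weighted random allocation

# Allocate soil types to a few sampled positions:
sites = [{"in_wetland": False, "slope_pct": 3.0},
         {"in_wetland": True,  "slope_pct": 1.0},
         {"in_wetland": False, "slope_pct": 20.0}]
print([allocate_soil_type(s) for s in sites])
```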
Hourly disaggregation of industrial CO2 emissions from Shenzhen, China.
Ma, Li; Cai, Bofeng; Wu, Feng; Zeng, Hui
2018-05-01
Shenzhen's total industrial CO2 emission was calculated using the IPCC-recommended bottom-up approach and data obtained from the China High Resolution Emission Gridded Data (CHRED). Monthly product yield was then used as the proxy to disaggregate each facility's total emission into monthly emissions. Since a thermal power unit's emission changes with daily and hourly power loads, typical power load curves were used as the proxy to disaggregate the monthly emissions on a daily and hourly basis. The daily and hourly emissions of other facilities were calculated according to two specially designed models: the "weekdays + Spring Festival holidays" model for February and the "weekdays + weekends" model for non-February months. The uncertainty ranges associated with the total amount calculation and the monthly, daily and hourly disaggregation were quantitatively estimated. The total combined uncertainty of the hourly disaggregation was ±26.19% for the "weekdays + weekends" mode and ±33.06% for the "weekdays + Spring Festival holidays" mode. These temporal-disaggregation methods and uncertainty estimation approaches could also be used for industrial air pollutant emission inventories and easily reproduced across the whole country. Copyright © 2018 Elsevier Ltd. All rights reserved.
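The proxy-based temporal disaggregation can be sketched as follows (illustrative only; the facility data, proxies, and the weekday/holiday distinction used in the paper are replaced by hypothetical values): the annual total is split by a monthly yield proxy and then by a normalized diurnal curve, preserving the total.

```python
import numpy as np

def disaggregate_annual_emission(annual_t, monthly_proxy, hourly_curve):
    """Proxy-based temporal disaggregation (illustrative sketch).
    annual_t     : facility's annual CO2 emission (tonnes)
    monthly_proxy: 12 monthly product-yield (or power-load) values
    hourly_curve : 24 values describing the typical diurnal activity pattern
    Returns monthly totals and an hourly profile for each month's average day."""
    monthly_proxy = np.asarray(monthly_proxy, float)
    hourly_curve = np.asarray(hourly_curve, float)
    monthly = annual_t * monthly_proxy / monthly_proxy.sum()
    days = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
    daily = monthly / days
    hourly = daily[:, None] * hourly_curve / hourly_curve.sum()   # shape (12, 24)
    return monthly, hourly

# Hypothetical facility: higher output in summer, two-shift diurnal pattern.
proxy = [80, 60, 90, 95, 100, 110, 120, 120, 110, 100, 90, 85]
curve = np.r_[np.full(6, 0.5), np.full(16, 1.5), np.full(2, 0.5)]
monthly, hourly = disaggregate_annual_emission(120_000.0, proxy, curve)
print(monthly.sum(), hourly.sum(axis=1)[0] * 31)   # totals preserved
```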
Forum Guide to Collecting and Using Disaggregated Data on Racial/Ethnic Subgroups. NFES 2017-017
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2016
2016-01-01
The National Forum on Education Statistics convened the Data Disaggregation of Racial/Ethnic Subgroups Working Group to identify best practices for disaggregating data on racial/ethnic subgroups. This guide is intended to identify some of the overarching benefits and challenges involved in data disaggregation; recommend appropriate practices for…
Planar polymer and glass graded index waveguides for data center applications
NASA Astrophysics Data System (ADS)
Pitwon, Richard; Yamauchi, Akira; Brusberg, Lars; Wang, Kai; Ishigure, Takaaki; Schröder, Henning; Neitz, Marcel; Worrall, Alex
2016-03-01
Embedded optical waveguide technology for optical printed circuit boards (OPCBs) has advanced considerably over the past decade, both in terms of materials and achievable waveguide structures. Two distinct classes of planar graded-index multimode waveguide have recently emerged, based on polymer and glass materials. We report on the suitability of graded-index polymer waveguides, fabricated using the Mosquito method, and graded-index glass waveguides, fabricated using ion diffusion on thin glass foils, for deployment within future data center environments as part of an optically disaggregated architecture. To this end, we first characterize the wavelength-dependent performance of the different waveguide types to assess their suitability with respect to the two dominant emerging multimode transceiver classes, based on directly modulated 850 nm VCSELs and 1310 nm silicon photonics devices. Furthermore, we connect the different waveguide types into an optically disaggregated data storage system and characterize their performance with respect to common high-speed data protocols used at the intra- and inter-rack level, including 10 Gb Ethernet and Serial Attached SCSI.
NASA Astrophysics Data System (ADS)
Cominola, A.; Nanda, R.; Giuliani, M.; Piga, D.; Castelletti, A.; Rizzoli, A. E.; Maziotis, A.; Garrone, P.; Harou, J. J.
2014-12-01
Designing effective urban water demand management strategies at the household level requires a deep understanding of the determinants of users' consumption. Low resolution data on residential water consumption, as traditionally metered, can only be used to model consumers' behavior at an aggregate level, whereas the end-use breakdown and the motivations and individual attitudes of consumers remain hidden. The recent advent of smart meters allows gathering high frequency consumption data that can be used both to provide instantaneous information to water utilities on the state of the network and to continuously inform users about their consumption and savings. Smart metered data also allow for the characterization of water end uses: this information, coupled with users' psychographic variables, constitutes the knowledge basis for developing individual and multi-user models, through which water utilities can test the impact of different management strategies. SmartH2O is an EU-funded project which aims at creating an ICT platform able to (i) capture and store quasi-real-time, high resolution residential water usage data measured with smart meters, (ii) infer the main determinants of residential water end uses and build customers' behavioral models, and (iii) predict how customer behavior can be influenced by various water demand management strategies, spanning from dynamic water pricing schemes to social awareness campaigns. The project exploits a social computing approach for raising users' awareness about water consumption and pursuing water savings in the residential sector. In this work, we first present the SmartH2O platform and its data collection, storage and analysis components. We then introduce some preliminary models and results on the disaggregation of total water consumption into end uses and single-user behaviors, using innovative, fully automated algorithms and overcoming the need for invasive metering campaigns at the fixture level.
Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C
2015-02-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
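To make the index-gauge construction and the disaggregation-ratio step above concrete, the following minimal sketch (hypothetical values, not the authors' Bayesian model) aggregates grid cells to an index gauge, stores per-cell ratios, and reapplies them to translate an index-gauge return level back to a gridded map:

```python
import numpy as np

# Hypothetical gridded seasonal precipitation maxima for one year:
# cells are CHIRPS pixels inside the watershed (values in mm).
grid = np.array([[42.0, 55.0, 48.0],
                 [60.0, 51.0, 47.0],
                 [39.0, 44.0, 58.0]])

# Aggregate the cells to a single representative "index gauge".
index_gauge = grid.mean()

# Disaggregation ratios: how each cell relates to the index gauge this year.
ratios = grid / index_gauge

# Translate an index-gauge return level (an assumed 100-year estimate from
# the frequency model) back into a gridded precipitation-frequency map.
index_gauge_100yr = 95.0
grid_100yr_map = ratios * index_gauge_100yr
print(grid_100yr_map.round(1))
```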
Carboni, Davide; Gluhak, Alex; McCann, Julie A.; Beach, Thomas H.
2016-01-01
Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, this should not be prohibitive, and appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, due to higher hardware costs and complexity, not to mention the risk of accidental leakages that can derive from the extra plumbing needed. Machine learning and data mining are promising techniques for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern water usage from the aggregated data acquired from a single point of observation. This paper provides an overview of water usage disaggregation systems and the related techniques adopted for water event classification. The state of the art in algorithms and testbeds used for fixture recognition is reviewed, and a discussion of the prominent challenges and future research is also included. PMID:27213397
Methodology to Assess No Touch Audit Software Using Field Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jie; Braun, James E.; Langner, M. Rois
The research presented in this report builds upon these previous efforts and proposes a set of tests to assess no touch audit tools using real utility bill and on-site data. The proposed assessment methodology explicitly investigates the behaviors of the monthly energy end uses with respect to outdoor temperature, i.e., the building energy signature, to help understand the Tool's disaggregation accuracy. The project team collaborated with Field Diagnosis Services, Inc. (FDSI) to identify appropriate test sites for the evaluation.
Validating Savings Claims of Cold Climate Zero Energy Ready Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, J.; Puttagunta, S.
This report details the validation methods used to analyze consumption at each of these homes. It includes a detailed end-use examination of consumption from the following categories: 1) Heating, 2) Cooling, 3) Lights, Appliances, and Miscellaneous Electric Loads (LAMELs) along with Domestic Hot Water Use, 4) Ventilation, and 5) PV generation. A utility bill disaggregation method, which allows a crude estimation of space conditioning loads based on outdoor air temperature, was also performed and the results compared to the actual measured data.
47 CFR 22.948 - Partitioning and Disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... partition or disaggregate their spectrum to other qualified entities. (2) Partitioning. During the five year... obtaining disaggregated spectrum may only use such spectrum in that portion of the cellular market encompassed by the original licensee's CGSA and may not use such spectrum to provide service to unserved...
47 CFR 22.948 - Partitioning and Disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... partition or disaggregate their spectrum to other qualified entities. (2) Partitioning. During the five year... obtaining disaggregated spectrum may only use such spectrum in that portion of the cellular market encompassed by the original licensee's CGSA and may not use such spectrum to provide service to unserved...
47 CFR 22.948 - Partitioning and Disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... partition or disaggregate their spectrum to other qualified entities. (2) Partitioning. During the five year... obtaining disaggregated spectrum may only use such spectrum in that portion of the cellular market encompassed by the original licensee's CGSA and may not use such spectrum to provide service to unserved...
47 CFR 22.948 - Partitioning and Disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... partition or disaggregate their spectrum to other qualified entities. (2) Partitioning. During the five year... obtaining disaggregated spectrum may only use such spectrum in that portion of the cellular market encompassed by the original licensee's CGSA and may not use such spectrum to provide service to unserved...
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
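The three-step structure described above can be sketched in simplified form. The sketch below works on a 1-D series with made-up values and an assumed lapse-rate-like correction rule; the actual system operates on 2-D atmospheric fields with rules found by the automatic search algorithm:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Step 0: hypothetical coarse 1-D field (e.g. 2 m temperature in K) to be refined 7x.
coarse = np.array([281.0, 282.5, 284.0, 283.0, 281.5])
x_coarse = np.arange(coarse.size)
x_fine = np.linspace(0, coarse.size - 1, coarse.size * 7)

# Step 1: quadratic spline interpolation of the low-resolution data.
fine = UnivariateSpline(x_coarse, coarse, k=2, s=0)(x_fine)

# Step 2: "deterministic" correction from a high-resolution surface covariate
# (here a made-up elevation anomaly and an assumed lapse-rate-like rule).
elev_anom = rng.normal(0.0, 50.0, x_fine.size)   # m relative to the coarse cell mean
fine += -0.0065 * elev_anom

# Step 3: AR(1) noise to restore the unresolved subgrid variability.
phi, sigma = 0.8, 0.2
noise = np.zeros_like(fine)
for i in range(1, noise.size):
    noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
fine += noise
print(fine.round(2))
```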
Characteristics and Performance of Existing Load Disaggregation Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Sullivan, Greg P.; Butner, Ryan S.
2015-04-10
Non-intrusive load monitoring (NILM) or non-intrusive appliance load monitoring (NIALM) is an analytic approach to disaggregate building loads based on a single metering point. This advanced load monitoring and disaggregation technique has the potential to provide an alternative to high-priced traditional sub-metering and to enable innovative approaches for energy conservation, energy efficiency, and demand response. However, since the inception of the concept in the 1980s, evaluations of these technologies have focused on reporting performance accuracy without investigating sources of inaccuracies or fully understanding and articulating the meaning of the metrics used to quantify performance. As a result, the market for, as well as advances in, these technologies have been slowly maturing. To improve the market for these NILM technologies, there has to be confidence that deployment will lead to benefits. In reality, not every end-user and application that this technology may enable requires the highest levels of performance accuracy to produce benefits. Also, there are other important characteristics that need to be considered, which may affect the appeal of NILM products to certain market targets (i.e., residential and commercial building consumers) and their suitability for particular applications. These characteristics include the following: 1) ease of use, the level of expertise/bandwidth required to properly use the product; 2) ease of installation, the level of expertise required to install along with hardware needs that impact product cost; and 3) ability to inform decisions and actions, whether the energy outputs received by end-users (e.g., third-party applications, residential users, building operators, etc.) empower decisions and actions to be taken at the time frames required for certain applications. Therefore, stakeholders, researchers, and other interested parties should be kept abreast of the evolving capabilities, uses, and characteristics of NILM products that make them attractive for certain building environments and different classes of end-users. The intent of this report is to raise awareness of trending NILM approaches. Additionally, three existing technologies were acquired and evaluated using the Residential Building Stock Assessment (RBSA) owner-occupied test bed operated by the Northwest Energy Efficiency Alliance (NEEA) to understand the performance accuracy of current NILM products under realistic conditions. Based on this field study experience, the characteristics exhibited by the NILM products included in the assessment are also discussed in this report in terms of ease of use, ease of installation, and ability to inform decisions and actions. Results of the analysis performed to investigate the accuracy of the participating NILM products in estimating the energy use of individual appliances are also presented.
Decentralized energy studies: Compendium of international studies and research
NASA Astrophysics Data System (ADS)
Wallace, C.
1980-03-01
With efficient use of energy, renewable energy sources can supply the majority, if not the totality, of energy supplies in developed nations at real energy prices that double or triple by 2025 (1975 prices). This appears true even in harsh climates with oil-dependent industrial economies. Large increases in end-use energy efficiency are cost-effective at present prices. Some reports show that cost-effective end-use efficiency improvements can reduce energy consumption (per capita, per unit of amenity, or per unit of output) by as much as 90 percent. This was demonstrated by highly disaggregated analyses of end uses. Such analyses consistently show larger potential for efficiency improvements than can be detected from conventional analyses of more aggregated data. As energy demands decline due to end-use efficiency improvements, energy supply problems subsequently decrease. Lifestyle changes, influenced by social factors, and rising energy prices can substantially reduce demands for energy. Such changes are already discernible in end-use energy studies. When energy-efficient capital stock is in place, many end-users of energy will be able to provide a substantial portion of their own energy needs from renewable energy sources that are directly available to them.
47 CFR 90.813 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... Specialized Mobile Radio Service § 90.813 Partitioned licenses and disaggregated spectrum. (a) Eligibility.... Spectrum may be disaggregated in any amount. (3) Combined partitioning and disaggregation. The Commission...
47 CFR 90.813 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... Specialized Mobile Radio Service § 90.813 Partitioned licenses and disaggregated spectrum. (a) Eligibility.... Spectrum may be disaggregated in any amount. (3) Combined partitioning and disaggregation. The Commission...
47 CFR 90.813 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... Specialized Mobile Radio Service § 90.813 Partitioned licenses and disaggregated spectrum. (a) Eligibility.... Spectrum may be disaggregated in any amount. (3) Combined partitioning and disaggregation. The Commission...
47 CFR 90.813 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... Specialized Mobile Radio Service § 90.813 Partitioned licenses and disaggregated spectrum. (a) Eligibility.... Spectrum may be disaggregated in any amount. (3) Combined partitioning and disaggregation. The Commission...
A Peltier-based freeze-thaw device for meteorite disaggregation
NASA Astrophysics Data System (ADS)
Ogliore, R. C.
2018-02-01
A Peltier-based freeze-thaw device for the disaggregation of meteorite or other rock samples is described. Meteorite samples are kept in six water-filled cavities inside a thin-walled Al block. This block is held between two Peltier coolers that are automatically cycled between cooling and warming. One cycle takes approximately 20 min. The device can run unattended for months, allowing for ˜10 000 freeze-thaw cycles that will disaggregate even meteorites with relatively low porosity. This device was used to disaggregate ordinary and carbonaceous chondrite regolith breccia meteorites to search for micrometeoroid impact craters.
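A quick arithmetic check of the quoted cycle time and cycle count (assumed values taken from the abstract) confirms that ~10 000 cycles indeed correspond to a few months of unattended operation:

```python
# Rough cycle arithmetic for the freeze-thaw device described above
# (assumed values; the abstract quotes ~20 min per cycle and ~10 000 cycles).
minutes_per_cycle = 20
cycles = 10_000
days_unattended = cycles * minutes_per_cycle / (60 * 24)
print(f"{days_unattended:.0f} days of continuous cycling")   # ~139 days, i.e. several months
```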
ERIC Educational Resources Information Center
Parker, Eugene T., III.; Kilgo, Cindy A.; Sheets, Jessica K. Ezell; Pascarella, Ernest T.
2016-01-01
The purpose of this study is to explore the effects of internship participation on college students, specifically the effect on college GPA. Further, because of the capacity to disaggregate students by race in the sample, this study is significant because it provides much needed empirical evidence surrounding the impact of participation in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
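The inversion described above is built on compressive sensing and sparse reconstruction. The following generic sketch (synthetic data, scikit-learn's Lasso as a stand-in solver, not the authors' wavelet or kernel parameterizations) illustrates how an L1 penalty can recover a sparse emission field from an underdetermined set of observations:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Underdetermined toy problem: 40 "concentration" observations, 200 unknown
# emission coefficients, only a few of which are truly non-zero (sparse field).
n_obs, n_coef = 40, 200
H = rng.normal(size=(n_obs, n_coef))        # stand-in for a transport/footprint matrix
x_true = np.zeros(n_coef)
x_true[rng.choice(n_coef, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
y = H @ x_true + rng.normal(0.0, 0.05, n_obs)

# L1-penalised estimate recovers a sparse solution despite n_obs << n_coef.
x_hat = Lasso(alpha=0.05, max_iter=10_000).fit(H, y).coef_
print("non-zero coefficients recovered:", np.count_nonzero(np.abs(x_hat) > 0.1))
```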
Representation of spatial cross correlations in large stochastic seasonal streamflow models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, G.C.; Kelman, J.; Pereira, M.V.F.
1988-05-01
Pereira et al. (1984) presented a special disaggregation procedure for generating cross-correlated monthly flows at many sites while using what are essentially univariate disaggregation models for the flows at each site. This was done by using a nonparametric procedure for constructing residual innovations or noise vectors with cross-correlated components. This note considers the theoretical underpinnings of that streamflow disaggregation procedure and a proposed variation, and their ability to reproduce the observed historical cross correlations among concurrent monthly flows at nine Brazilian stations.
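The key idea above, coupling univariate per-site models through nonparametric cross-correlated innovation vectors, can be illustrated with a minimal sketch (synthetic residuals for three sites rather than the nine stations; not the original model): whole concurrent residual vectors are resampled from history so that spatial cross-correlation is preserved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matrix of historical standardized residuals: rows are months,
# columns are sites. Concurrent values carry the spatial cross-correlation.
hist_residuals = rng.multivariate_normal(
    mean=np.zeros(3),
    cov=[[1.0, 0.6, 0.4], [0.6, 1.0, 0.5], [0.4, 0.5, 1.0]],
    size=240)                                  # 20 years x 12 months, 3 sites for brevity

def sample_innovation(residuals, rng):
    """Resample a whole historical row instead of sampling each site independently."""
    row = rng.integers(residuals.shape[0])
    return residuals[row]                      # concurrent vector, correlation intact

synthetic = np.array([sample_innovation(hist_residuals, rng) for _ in range(240)])
print(np.corrcoef(synthetic.T).round(2))       # close to the historical cross-correlations
```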
Mobile Sensing in Environmental Health and Neighborhood Research.
Chaix, Basile
2018-04-01
Public health research has witnessed a rapid development in the use of location, environmental, behavioral, and biophysical sensors that provide high-resolution objective time-stamped data. This burgeoning field is stimulated by the development of novel multisensor devices that collect data for an increasing number of channels and algorithms that predict relevant dimensions from one or several data channels. Global positioning system (GPS) tracking, which enables geographic momentary assessment, permits researchers to assess multiplace personal exposure areas and the algorithm-based identification of trips and places visited, eventually validated and complemented using a GPS-based mobility survey. These methods open a new space-time perspective that considers the full dynamic of residential and nonresidential momentary exposures; spatially and temporally disaggregates the behavioral and health outcomes, thus replacing them in their immediate environmental context; investigates complex time sequences; explores the interplay among individual, environmental, and situational predictors; performs life-segment analyses considering infraindividual statistical units using case-crossover models; and derives recommendations for just-in-time interventions.
47 CFR 101.1323 - Spectrum aggregation, disaggregation, and partitioning.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Spectrum aggregation, disaggregation, and... Requirements § 101.1323 Spectrum aggregation, disaggregation, and partitioning. (a) Eligibility. (1) Parties... aggregate spectrum in any MAS bands, but may not disaggregate their licensed spectrum or partition their...
47 CFR 101.1323 - Spectrum aggregation, disaggregation, and partitioning.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Spectrum aggregation, disaggregation, and... Requirements § 101.1323 Spectrum aggregation, disaggregation, and partitioning. (a) Eligibility. (1) Parties... aggregate spectrum in any MAS bands, but may not disaggregate their licensed spectrum or partition their...
47 CFR 90.365 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... § 90.365 Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Party seeking approval... disaggregate their licensed spectrum at any time following the grant of their licenses. Multilateration LMS...
47 CFR 90.365 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... § 90.365 Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Party seeking approval... disaggregate their licensed spectrum at any time following the grant of their licenses. Multilateration LMS...
47 CFR 101.1323 - Spectrum aggregation, disaggregation, and partitioning.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Spectrum aggregation, disaggregation, and... Requirements § 101.1323 Spectrum aggregation, disaggregation, and partitioning. (a) Eligibility. (1) Parties... aggregate spectrum in any MAS bands, but may not disaggregate their licensed spectrum or partition their...
47 CFR 90.365 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... § 90.365 Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Party seeking approval... disaggregate their licensed spectrum at any time following the grant of their licenses. Multilateration LMS...
47 CFR 90.365 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... § 90.365 Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Party seeking approval... disaggregate their licensed spectrum at any time following the grant of their licenses. Multilateration LMS...
47 CFR 101.1323 - Spectrum aggregation, disaggregation, and partitioning.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Spectrum aggregation, disaggregation, and... Requirements § 101.1323 Spectrum aggregation, disaggregation, and partitioning. (a) Eligibility. (1) Parties... aggregate spectrum in any MAS bands, but may not disaggregate their licensed spectrum or partition their...
47 CFR 101.1323 - Spectrum aggregation, disaggregation, and partitioning.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Spectrum aggregation, disaggregation, and... Requirements § 101.1323 Spectrum aggregation, disaggregation, and partitioning. (a) Eligibility. (1) Parties... aggregate spectrum in any MAS bands, but may not disaggregate their licensed spectrum or partition their...
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
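One plausible reading of the multiplicative anomaly field and of modelling anomalies in rank space is sketched below for a single fine-scale cell (toy data, hypothetical variable names; an illustrative assumption, not the published rank-BCSD formulation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy series for one fine-scale cell: observed fine-scale precipitation and a
# biased disaggregated (GCM-derived) counterpart, both hypothetical.
fine_obs   = rng.gamma(2.0, 5.0, 100)
interp_gcm = fine_obs * rng.uniform(0.6, 1.4, 100)

anomalies = fine_obs / interp_gcm          # multiplicative bias-correction anomalies

# Mean-anomaly approach: apply one average factor everywhere, masking the
# temporal variation in the anomalies.
mean_corrected = interp_gcm * anomalies.mean()

# Rank-space idea (toy version): pair each disaggregated value with the anomaly
# occupying the same rank, rather than with the mean anomaly.
sorted_anoms = np.sort(anomalies)
ranks = interp_gcm.argsort().argsort()     # 0 = smallest disaggregated value
rank_corrected = interp_gcm * sorted_anoms[ranks]

print(np.quantile(fine_obs, 0.95).round(1),
      np.quantile(mean_corrected, 0.95).round(1),
      np.quantile(rank_corrected, 0.95).round(1))
```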
DOT National Transportation Integrated Search
2009-02-01
This working paper describes a group of techniques for disaggregating origin-destination tables for truck forecasting that makes explicit use of observed traffic on a network. Six models within the group are presented, each of which uses nonlinea...
NASA Astrophysics Data System (ADS)
Bindhu, V. M.; Narasimhan, B.
2015-03-01
Normalized Difference Vegetation Index (NDVI), a key parameter in understanding vegetation dynamics, has high spatial and temporal variability. However, continuous monitoring of NDVI is not feasible at fine spatial resolution (<60 m) owing to the long revisit time needed by the satellites to acquire fine spatial resolution data. The study attains further significance in the case of humid tropical regions of the earth, where the prevailing atmospheric conditions restrict the availability of fine-resolution cloud-free images at a high temporal frequency. As an alternative to the lack of high-resolution images, the current study demonstrates a novel disaggregation method (DisNDVI) which integrates the spatial information from a single fine-resolution image and temporal information, in terms of crop phenology, from a time series of coarse-resolution images to generate estimates of NDVI at fine spatial and temporal resolution. The phenological variation of the pixels captured at the coarser scale provides the basis for relating the temporal variability of the pixel with the NDVI available at fine resolution. The proposed methodology was tested over a 30 km × 25 km spatially heterogeneous study area located in the south of Tamil Nadu, India. The robustness of the algorithm was assessed by an independent comparison of the disaggregated NDVI and observed NDVI obtained from concurrent Landsat ETM+ imagery. The results showed good spatial agreement across the study area, dominated by agriculture and forest pixels, with a root mean square error of 0.05. The validation done at the coarser scale showed that disaggregated NDVI spatially averaged to 240 m compared well with concurrent MODIS NDVI at 240 m (R2 > 0.8). The validation results demonstrate the effectiveness of DisNDVI in improving the spatial and temporal resolution of NDVI images for utility in fine-scale hydrological applications such as crop growth monitoring and estimation of evapotranspiration.
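A minimal sketch of the disaggregation idea, one fine-resolution image supplying the spatial pattern and a coarse-resolution time series supplying the phenological evolution, is given below. The blending rule (scaling each fine pixel by the temporal ratio of its parent coarse pixel) is an illustrative assumption, not the published DisNDVI formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: one fine NDVI image (6x6) acquired at time t0, and a
# coarse NDVI time series (2x2, each coarse cell covering a 3x3 block of
# fine pixels) for times t0..t3.
fine_t0 = np.clip(rng.normal(0.5, 0.1, (6, 6)), 0, 1)
coarse_series = np.clip(rng.normal(0.5, 0.1, (4, 2, 2)), 0, 1)

def disaggregate(fine_t0, coarse_series, block=3):
    """Scale the fine image through time using the coarse phenological signal."""
    out = []
    coarse_t0 = coarse_series[0]
    for coarse_t in coarse_series:
        ratio = coarse_t / coarse_t0                          # temporal change per coarse cell
        ratio_fine = np.kron(ratio, np.ones((block, block)))  # broadcast to the fine grid
        out.append(np.clip(fine_t0 * ratio_fine, 0, 1))
    return np.stack(out)

fine_series = disaggregate(fine_t0, coarse_series)
print(fine_series.shape)    # (4, 6, 6): fine spatial detail at every coarse time step
```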
Command Disaggregation Attack and Mitigation in Industrial Internet of Things
Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan
2017-01-01
A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework. PMID:29065461
Command Disaggregation Attack and Mitigation in Industrial Internet of Things.
Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan
2017-10-21
A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.
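The two attack modes and the correlation-based detection idea described above can be illustrated with a minimal consistency check between a central command and the sub-commands observed at actuator inputs (hypothetical command names and structure; not the authors' full framework):

```python
# Minimal consistency check between a central command and the sub-commands
# observed at actuator inputs (hypothetical command structure, illustrative only).
EXPECTED_PLAN = {
    "open_valve_group": [("actuator_A", "open"), ("actuator_B", "open")],
    "shutdown_line":    [("actuator_B", "close"), ("actuator_C", "stop")],
}

def check_disaggregation(central_cmd, observed_subcmds):
    """Flag mode (1) disordered sequences and mode (2) sub-commands routed to wrong actuators."""
    expected = EXPECTED_PLAN.get(central_cmd)
    if expected is None:
        return "unknown command"
    if sorted(observed_subcmds) != sorted(expected):
        return "ALERT: sub-command allocated to wrong actuator"
    if observed_subcmds != expected:
        return "ALERT: sub-command sequence disordered"
    return "ok"

print(check_disaggregation("open_valve_group",
                           [("actuator_A", "open"), ("actuator_B", "open")]))   # ok
print(check_disaggregation("shutdown_line",
                           [("actuator_C", "stop"), ("actuator_B", "close")]))  # disordered
```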
A statistical approach for isolating fossil fuel emissions in atmospheric inverse problems
Yadav, Vineet; Michalak, Anna M.; Ray, Jaideep; ...
2016-10-27
Independent verification and quantification of fossil fuel (FF) emissions constitutes a considerable scientific challenge. By coupling atmospheric observations of CO2 with models of atmospheric transport, inverse models offer the possibility of overcoming this challenge. However, disaggregating the biospheric and FF flux components of terrestrial fluxes from CO2 concentration measurements has proven to be difficult, due to observational and modeling limitations. In this study, we propose a statistical inverse modeling scheme for disaggregating winter-time fluxes on the basis of their unique error covariances and covariates, where these covariances and covariates are representative of the underlying processes affecting FF and biospheric fluxes. The application of the method is demonstrated with one synthetic and two real data prototypical inversions using in situ CO2 measurements over North America. Inversions are performed only for the month of January, as the predominance of the biospheric CO2 signal relative to the FF CO2 signal and observational limitations preclude disaggregation of the fluxes in other months. The quality of disaggregation is assessed primarily through examination of the a posteriori covariance between disaggregated FF and biospheric fluxes at regional scales. Findings indicate that the proposed method is able to robustly disaggregate fluxes regionally at monthly temporal resolution, with a posteriori cross covariance lower than 0.15 µmol m⁻² s⁻¹ between FF and biospheric fluxes. Error covariance models and covariates based on temporally varying FF inventory data provide a more robust disaggregation than static proxies (e.g., nightlight intensity and population density). However, the synthetic data case study shows that disaggregation is possible even in the absence of detailed temporally varying FF inventory data.
NASA Astrophysics Data System (ADS)
Safeeq, Mohammad; Fares, Ali
2011-12-01
Daily and sub-daily weather data are often required for hydrological and environmental modeling. Various weather generator programs have been used to generate synthetic climate data where observed climate data are limited. In this study, a weather data generator, ClimGen, was evaluated for generating daily precipitation, temperature, and wind speed data at four tropical watersheds located in Hawai`i, USA. We also evaluated different daily to sub-daily weather data disaggregation methods for precipitation, air temperature, dew point temperature, and wind speed at Mākaha watershed. The hydrologic significance of the different disaggregation methods was evaluated using the Distributed Hydrology Soil Vegetation Model. The MuDRain and diurnal methods outperformed the uniform distribution in disaggregating daily precipitation; however, the diurnal method is more consistent if accurate estimates of hourly precipitation intensities are desired. All of the air temperature disaggregation methods performed reasonably well, but goodness-of-fit statistics were slightly better for the sine curve model with a 2 h lag. The cosine model performed better than the random model in disaggregating daily wind speed. The largest differences in annual water balance were related to wind speed, followed by precipitation and dew point temperature. Simulated hourly streamflow, evapotranspiration, and groundwater recharge were less sensitive to the method of disaggregating daily air temperature. ClimGen performed well in generating the minimum and maximum temperature and wind speed. However, for precipitation, it clearly underestimated the number of extreme rainfall events with an intensity of >100 mm/day at all four locations. ClimGen was unable to replicate the distribution of observed precipitation at three locations (Honolulu, Kahului, and Hilo). ClimGen was able to reproduce the distributions of observed minimum temperature at Kahului and wind speed at Kahului and Hilo. Although the weather data generation and disaggregation methods were evaluated for a few Hawaiian watersheds, the results presented can be applied to similar mountainous settings, as well as to other specific locations, to further the site-specific performance evaluation of these tested models.
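As an illustration of the class of sub-daily temperature disaggregation methods evaluated above, the sketch below interpolates hourly temperatures from daily minima and maxima with a simple sine curve and a 2-hour lag. The exact functional form and timing are assumptions for illustration, not the ClimGen or study formulation:

```python
import numpy as np

def sine_disaggregate(tmin, tmax, lag_hours=2):
    """Hourly temperatures from daily Tmin/Tmax using a simple sine curve.

    Assumes the minimum occurs near sunrise and the maximum in mid-afternoon,
    with the whole curve shifted by `lag_hours`; a common textbook form used
    here only to illustrate the class of methods evaluated above.
    """
    hours = np.arange(24)
    mean = (tmax + tmin) / 2.0
    amp = (tmax - tmin) / 2.0
    # One full cycle per day, peaking around 15:00 plus the lag.
    return mean + amp * np.sin(2 * np.pi * (hours - 9 - lag_hours) / 24.0)

print(sine_disaggregate(tmin=19.0, tmax=29.0).round(1))
```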
Kelly, Jack; Knottenbelt, William
2015-01-01
Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the 'ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate the water balance at a catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are distributed cross sections or equivalent cross sections (ECS) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of the first-order sub-basins to further reduce computational time in SMART. Previous investigations using SMART have shown that the temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach. However, the spatial variability of soil moisture within a given HRU is ignored. Here, we examined a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model that is coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
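A much simpler, pattern-based stand-in for the statistical disaggregation idea, spreading an HRU-mean soil moisture value to pixels with weights learned from a reference distributed simulation while preserving the HRU mean, is sketched below (hypothetical data; the study's scheme uses a covariance matrix rather than a fixed pattern):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical reference simulation: daily soil moisture for the 50 pixels of
# one HRU (e.g. from a distributed ParFlow.CLM run), used only to derive a
# fixed spatial pattern for disaggregation.
reference = rng.uniform(0.15, 0.35, (365, 50))
pattern = reference.mean(axis=0)                 # long-term pixel pattern
weights = pattern / pattern.mean()               # mean-preserving weights

def disaggregate_hru(hru_mean, weights):
    """Spread an HRU-mean soil moisture value to pixels, preserving the mean."""
    return hru_mean * weights

pixels = disaggregate_hru(hru_mean=0.22, weights=weights)
print(round(pixels.mean(), 3))                   # 0.22, HRU mean preserved
```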
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
... information to help State educational agencies (SEAs), local educational agencies (LEAs), schools, and... student population. SEAs, LEAs, schools, and IHEs might then use those data to improve their ability to... seeking information on disaggregation practices that SEAs, LEAs, schools, and IHEs use when collecting and...
47 CFR 22.948 - Partitioning and Disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Partitioning and Disaggregation. 22.948 Section 22.948 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES PUBLIC MOBILE SERVICES Cellular Radiotelephone Service § 22.948 Partitioning and Disaggregation. (a) Eligibility...
47 CFR 101.1111 - Partitioning and disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES FIXED MICROWAVE SERVICES Competitive Bidding Procedures for LMDS § 101.1111 Partitioning and disaggregation. (a) Definitions. Disaggregation. The assignment of discrete portions or “blocks” of spectrum licensed to a geographic licensee or qualifying entity. Partitioning. The assignment of geographic portions...
47 CFR 22.513 - Partitioning and disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Partitioning and disaggregation. 22.513 Section 22.513 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES PUBLIC MOBILE SERVICES Paging and Radiotelephone Service § 22.513 Partitioning and disaggregation. MEA and EA...
47 CFR 22.513 - Partitioning and disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false Partitioning and disaggregation. 22.513 Section 22.513 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES PUBLIC MOBILE SERVICES Paging and Radiotelephone Service § 22.513 Partitioning and disaggregation. MEA and EA...
47 CFR 22.513 - Partitioning and disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false Partitioning and disaggregation. 22.513 Section 22.513 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES PUBLIC MOBILE SERVICES Paging and Radiotelephone Service § 22.513 Partitioning and disaggregation. MEA and EA...
47 CFR 22.513 - Partitioning and disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Partitioning and disaggregation. 22.513 Section 22.513 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES PUBLIC MOBILE SERVICES Paging and Radiotelephone Service § 22.513 Partitioning and disaggregation. MEA and EA...
A comparative analysis of two highly spatially resolved European atmospheric emission inventories
NASA Astrophysics Data System (ADS)
Ferreira, J.; Guevara, M.; Baldasano, J. M.; Tchepel, O.; Schaap, M.; Miranda, A. I.; Borrego, C.
2013-08-01
A reliable emissions inventory is highly important for air quality modelling applications, especially at regional or local scales, which require high resolutions. Consequently, higher resolution emission inventories have been developed that are suitable for regional air quality modelling. This research performs an inter-comparative analysis of different spatial disaggregation methodologies for atmospheric emission inventories. This study is based on two different European emission inventories with different spatial resolutions: 1) the EMEP (European Monitoring and Evaluation Programme) inventory and 2) an emission inventory developed by the TNO (Netherlands Organisation for Applied Scientific Research). These two emission inventories were converted into three distinct gridded emission datasets as follows: (i) the EMEP emission inventory was disaggregated by area (EMEParea) and (ii) following a more complex methodology (HERMES-DIS - High-Elective Resolution Modelling Emissions System - DISaggregation module) to understand and evaluate the influence of different disaggregation methods; and (iii) the TNO gridded emissions, which are based on different emission data sources and different disaggregation methods. A predefined common grid with a spatial resolution of 12 × 12 km² was used to compare the three datasets spatially. The inter-comparative analysis was performed by source sector (SNAP - Selected Nomenclature for Air Pollution) with emission totals for selected pollutants. It included the computation of difference maps (to focus on the spatial variability of emission differences) and a linear regression analysis to calculate the coefficients of determination and to quantitatively measure differences. From the spatial analysis, greater differences were found for residential/commercial combustion (SNAP02), solvent use (SNAP06) and road transport (SNAP07). These findings were related to the different spatial disaggregation conducted by the TNO and HERMES-DIS for the first two sectors and to the distinct data sources used by the TNO and HERMES-DIS for road transport. Regarding the regression analysis, the greatest correlation occurred between EMEParea and HERMES-DIS because the latter is derived from the former, which does not occur for the TNO emissions. The greatest correlations were encountered for agricultural NH3 emissions, due to the common use of the CORINE Land Cover database for disaggregation. The point source emissions (energy industries, industrial processes, industrial combustion and extraction/distribution of fossil fuels) resulted in the lowest coefficients of determination. The spatial variability of SOx differed among the emissions obtained from the different disaggregation methods. In conclusion, HERMES-DIS and TNO are two distinct emission inventories, both very well discretized and detailed, and suitable for air quality modelling. However, the different databases and distinct disaggregation methodologies used certainly result in different spatial emission patterns. This fact should be considered when applying regional atmospheric chemical transport models. Future work will focus on the evaluation of air quality model performance and sensitivity to these spatial discrepancies in emission inventories. Air quality modelling will benefit from the availability of appropriately resolved, consistent and reliable emission inventories.
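The core of the quantitative comparison, difference maps and coefficients of determination between gridded emission totals on a common grid, reduces to a few lines. The sketch below uses hypothetical 12 km grids for a single SNAP sector:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two hypothetical gridded emission fields (t/yr per 12x12 km2 cell) for one
# SNAP sector, already regridded to the same common grid.
inv_a = rng.gamma(2.0, 10.0, (50, 60))
inv_b = inv_a * rng.normal(1.0, 0.3, (50, 60))   # a correlated but noisy alternative

diff_map = inv_b - inv_a                         # spatial pattern of disagreement

# Coefficient of determination of the cell-by-cell linear relationship.
r = np.corrcoef(inv_a.ravel(), inv_b.ravel())[0, 1]
print(f"R^2 = {r**2:.2f}, mean absolute difference = {np.abs(diff_map).mean():.1f} t/yr")
```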
Cascade rainfall disaggregation application in U.S. Central Plains
USDA-ARS?s Scientific Manuscript database
Hourly rainfall data are increasingly used in complex, process-based simulations of the environment. Long records of daily rainfall are common, but long continuous records of hourly rainfall are rare and must be developed. A Multiplicative Random Cascade (MRC) model is proposed to disaggregate observed d...
47 CFR 95.823 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. Parties seeking Commission approval of geographic partitioning or spectrum disaggregation of 218-219 MHz Service system licenses shall request an...
47 CFR 95.823 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. Parties seeking Commission approval of geographic partitioning or spectrum disaggregation of 218-219 MHz Service system licenses shall request an...
47 CFR 95.823 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. Parties seeking Commission approval of geographic partitioning or spectrum disaggregation of 218-219 MHz Service system licenses shall request an...
47 CFR 95.823 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. Parties seeking Commission approval of geographic partitioning or spectrum disaggregation of 218-219 MHz Service system licenses shall request an...
47 CFR 95.823 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. Parties seeking Commission approval of geographic partitioning or spectrum disaggregation of 218-219 MHz Service system licenses shall request an...
47 CFR 90.911 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum. 90.911 Section 90.911 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... Specialized Mobile Radio Service § 90.911 Partitioned licenses and disaggregated spectrum. (a) Eligibility...
47 CFR 90.911 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum. 90.911 Section 90.911 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... Specialized Mobile Radio Service § 90.911 Partitioned licenses and disaggregated spectrum. (a) Eligibility...
47 CFR 90.911 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum. 90.911 Section 90.911 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... Specialized Mobile Radio Service § 90.911 Partitioned licenses and disaggregated spectrum. (a) Eligibility...
47 CFR 90.911 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum. 90.911 Section 90.911 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... Specialized Mobile Radio Service § 90.911 Partitioned licenses and disaggregated spectrum. (a) Eligibility...
47 CFR 90.911 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum. 90.911 Section 90.911 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... Specialized Mobile Radio Service § 90.911 Partitioned licenses and disaggregated spectrum. (a) Eligibility...
Disaggregating from daily to sub-daily rainfall under a future climate
NASA Astrophysics Data System (ADS)
Westra, S.; Evans, J.; Mehrotra, R.; Sharma, A.
2012-04-01
We describe an algorithm for disaggregating daily rainfall into sub-daily rainfall 'fragments' (continuous fine-resolution rainfall sequences whose total depth sums to the daily rainfall amount) under a future, warmer climate. The basis of the algorithm is to re-sample sub-daily fragments from the historical record conditional on the total daily rainfall amount and a range of atmospheric predictors representative of the future climate. The logic is that as the atmosphere warms, future rainfall patterns will be more reflective of historical rainfall patterns which occurred on warmer days at the same location, or at locations which have an atmospheric profile more reflective of expected future conditions. When looking at the scaling from daily to sub-daily rainfall over the historical record, it was found that the relationship varied significantly by season and by location, with rainfall patterns in warmer seasons or at warmer locations typically showing more intense rain falling over shorter periods compared with cooler seasons and stations. Importantly, by regressing against atmospheric covariates such as temperature this effect was almost entirely eliminated, providing a basis for suggesting the approach may be valid when extrapolating sub-daily sequences to a future climate. The method of fragments algorithm was then applied to nine stations around Australia, and showed that, when holding the total daily rainfall constant, maximum short-duration rainfall intensity increased by between 4.1% and 13.4% per degree change in temperature for the maximum six-minute burst, between 3.1% and 6.8% for the maximum one-hour burst, and between 1.5% and 3.5% for the fraction of the day with no rainfall. This highlights that a large proportion of the change to the distribution of precipitation in the future is likely to occur at sub-daily timescales, with significant implications for many hydrological systems.
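A minimal sketch of conditional fragment resampling, choosing a historical sub-daily pattern from days with a similar daily total and temperature, is shown below. The nearest-neighbour selection rule and the synthetic record are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical historical record: for each wet day, its total (mm), mean
# temperature (deg C) and the fractions of the total falling in each hour.
n_days = 500
totals = rng.gamma(2.0, 8.0, n_days)
temps = rng.normal(18.0, 6.0, n_days)
raw = rng.gamma(0.3, 1.0, (n_days, 24))
fragments = raw / raw.sum(axis=1, keepdims=True)      # each row sums to 1

def disaggregate_day(day_total, day_temp, k=10):
    """Pick a historical fragment from the k nearest days in (total, temperature) space."""
    d = ((totals - day_total) / totals.std())**2 + ((temps - day_temp) / temps.std())**2
    candidates = np.argsort(d)[:k]
    chosen = rng.choice(candidates)
    return day_total * fragments[chosen]               # hourly depths summing to the daily total

hourly = disaggregate_day(day_total=25.0, day_temp=24.0)
print(hourly.sum().round(2), hourly.max().round(2))
```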
Disaggregating Assessment to Close the Loop and Improve Student Learning
ERIC Educational Resources Information Center
Rawls, Janita; Hammons, Stacy
2015-01-01
This study examined student learning outcomes for accelerated degree students as compared to conventional undergraduate students, disaggregated by class levels, to develop strategies for then closing the loop with assessment. Using the National Survey of Student Engagement, critical thinking and oral and written communication outcomes were…
Effects of slaking and mechanical breakdown on disaggregation and splash erosion
USDA-ARS?s Scientific Manuscript database
The contributions of different aggregate breakdown mechanisms to splash erosion are still obscure. This study was designed to investigate the effects of different soil disaggregation mechanisms on splash erosion. Loam clay soil, clay loam soil, and sandy loam soil were used in this study. Soil aggre...
DOT National Transportation Integrated Search
1999-12-01
This paper analyzes the freight demand characteristics that drive modal choice by means of a large scale, national, disaggregate revealed preference database for shippers in France in 1988, using a nested logit. Particular attention is given to priva...
47 CFR 27.15 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false Geographic partitioning and spectrum... Geographic partitioning and spectrum disaggregation. (a) Eligibility. (1) Parties seeking approval for... service area or disaggregate their licensed spectrum at any time following the grant of their licenses. (b...
47 CFR 101.56 - Partitioned service areas (PSAs) and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned service areas (PSAs) and disaggregated spectrum. 101.56 Section 101.56 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED..., Modifications, Conditions and Forfeitures § 101.56 Partitioned service areas (PSAs) and disaggregated spectrum...
47 CFR 90.813 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... defined by coordinate points at every 3 degrees along the partitioned service area unless an FCC... disaggregation. (c) Installment payments—(1) Apportioning the balance on installment payment plans. When a... partitions its licensed area or disaggregates spectrum to another party, the outstanding balance owed by the...
47 CFR 27.805 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1.4 GHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.904 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1670-1675 MHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.904 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1670-1675 MHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.805 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1.4 GHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.805 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1.4 GHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.805 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1.4 GHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.904 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1670-1675 MHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.904 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1670-1675 MHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.904 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1670-1675 MHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
47 CFR 27.805 - Geographic partitioning and spectrum disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Geographic partitioning and spectrum... partitioning and spectrum disaggregation. An entity that acquires a portion of a 1.4 GHz band licensee's geographic area or spectrum subject to a geographic partitioning or spectrum disaggregation agreement under...
Kelly, Jack; Knottenbelt, William
2015-01-01
Many countries are rolling out smart electricity meters. These measure a home’s total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the ‘ground truth’ demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset. PMID:25984347
NASA Astrophysics Data System (ADS)
Kelly, Jack; Knottenbelt, William
2015-03-01
Many countries are rolling out smart electricity meters. These measure a home’s total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the ‘ground truth’ demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
47 CFR 24.714 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Parties seeking approval for... § 24.839. (2) Broadband PCS licensees in spectrum blocks A, B, D, and E and broadband PCS C and F block...
47 CFR 24.714 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum... Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Parties seeking approval for... § 24.839. (2) Broadband PCS licensees in spectrum blocks A, B, D, and E and broadband PCS C and F block...
47 CFR 24.714 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Parties seeking approval for... § 24.839. (2) Broadband PCS licensees in spectrum blocks A, B, D, and E and broadband PCS C and F block...
47 CFR 24.714 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Parties seeking approval for... § 24.839. (2) Broadband PCS licensees in spectrum blocks A, B, D, and E and broadband PCS C and F block...
47 CFR 24.714 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... Partitioned licenses and disaggregated spectrum. (a) Eligibility. (1) Parties seeking approval for... § 24.839. (2) Broadband PCS licensees in spectrum blocks A, B, D, and E and broadband PCS C and F block...
NASA Astrophysics Data System (ADS)
Chakrabarti, S.; Judge, J.; Bindlish, R.; Bongiovanni, T.; Jackson, T. J.
2016-12-01
The NASA Soil Moisture Active Passive (SMAP) mission provides global observations of brightness temperatures (TB) at 36 km. For these observations to be relevant to studies in agricultural regions, the TB values need to be downscaled to finer resolutions. In this study, a machine learning algorithm is introduced for downscaling TB from 36 km to 9 km. The algorithm uses image segmentation to cluster the study region based on meteorological and land cover similarity, followed by a support vector machine based regression that computes the value of the disaggregated TB at all pixels. High resolution remote sensing products such as land surface temperature, normalized difference vegetation index, enhanced vegetation index, precipitation, soil texture, and land cover were used for downscaling. The algorithm was implemented in Iowa, United States, during the growing season from April to July 2015, when the SMAP L3_SM_AP TB product at 9 km was available for comparison. In addition, the downscaled estimates from the algorithm were compared with 9 km TB obtained by resampling the SMAP L1B_TB product at 36 km. The downscaled TB were very similar to the SMAP L3_SM_AP TB product, even for vegetated areas, with a mean difference ≤ 5 K; however, the standard deviation of the downscaled TB was lower by 7 K than that of the L3_SM_AP product. The probability density functions of the downscaled TB were similar to those of the SMAP TB. The results indicate that these downscaling algorithms may be used to downscale TB by exploiting complex non-linear correlations on a grid, without using active microwave observations.
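A rough sketch of the regression step in Python with scikit-learn (the predictor fields and data are synthetic placeholders for the ancillary products named above, and the segmentation step is omitted): a support vector regression is trained on TB against the fine-scale predictors and then evaluated on the remaining pixels.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Placeholder predictors for each 9 km pixel: LST, NDVI, EVI, precipitation,
# sand fraction (stand-ins for the ancillary products named in the abstract).
n_pixels = 5000
X = rng.normal(size=(n_pixels, 5))
# Synthetic brightness temperature with a non-linear dependence on the predictors.
tb = 270 + 10 * np.tanh(X[:, 0]) - 5 * X[:, 1] + rng.normal(0, 1, n_pixels)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X[:4000], tb[:4000])            # train on one cluster/segment
tb_downscaled = model.predict(X[4000:])   # evaluate on the remaining pixels

print(np.mean(np.abs(tb_downscaled - tb[4000:])))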
NASA Astrophysics Data System (ADS)
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to varying traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
A high-performance genetic algorithm: using traveling salesman problem as a case.
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
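The following Python sketch illustrates the underlying notion of 'common genes' for the TSP case: a plain order-crossover GA is run, and the edges shared by every individual in the population are extracted, since these are the genes the abstract suggests can be cached to skip redundant computation. It is an illustration of the idea under assumed operators and parameters, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(2)

n_cities, pop_size, n_gens = 20, 60, 200
coords = rng.random((n_cities, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def tour_length(tour):
    return dist[tour, np.roll(tour, -1)].sum()

def order_crossover(p1, p2):
    a, b = sorted(rng.choice(n_cities, 2, replace=False))
    child = -np.ones(n_cities, dtype=int)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    child[child == -1] = fill
    return child

def common_edges(pop):
    """Edges (as frozensets) present in every individual: the 'common genes'
    that could be cached to avoid re-evaluating them in later generations."""
    edge_sets = [{frozenset((t[i], t[(i + 1) % n_cities])) for i in range(n_cities)}
                 for t in pop]
    return set.intersection(*edge_sets)

pop = [rng.permutation(n_cities) for _ in range(pop_size)]
for gen in range(n_gens):
    pop.sort(key=tour_length)
    elite = pop[: pop_size // 2]
    children = []
    while len(children) < pop_size - len(elite):
        p1, p2 = rng.choice(len(elite), 2, replace=False)
        child = order_crossover(elite[p1], elite[p2])
        if rng.random() < 0.2:                         # swap mutation
            i, j = rng.choice(n_cities, 2, replace=False)
            child[i], child[j] = child[j], child[i]
        children.append(child)
    pop = elite + children

print("best length:", tour_length(min(pop, key=tour_length)))
print("edges shared by the whole population:", len(common_edges(pop)))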
Characterization and disaggregation of daily rainfall in the Upper Blue Nile Basin in Ethiopia
NASA Astrophysics Data System (ADS)
Engida, Agizew N.; Esteves, Michel
2011-03-01
In Ethiopia, available rainfall records are mainly limited to daily time steps. Though rainfall data at shorter time steps are important for various purposes like modeling of erosion processes and flood hydrographs, they are hardly available in Ethiopia. The objectives of this study were (i) to study the temporal characteristics of daily rains at two stations in the region of the Upper Blue Nile Basin (UBNB) and (ii) to calibrate and evaluate a daily rainfall disaggregation model. The analysis was based on rainfall data of the Bahir Dar and Gonder Meteorological Stations. The disaggregation model used was the Modified Bartlett-Lewis Rectangular Pulse Model (MBLRPM). The mean daily rainfall intensity varied from about 4 mm in the dry season to 17 mm in the wet season, with a corresponding variation in raindays of 0.4-26 days. The observed maximum daily rainfall varied from 13 mm in the dry month to 200 mm in the wet month. The average wet/dry spell length varied from 1/21 days in the dry season to 6/1 days in the rainy season. Most of the rainfall occurs in the afternoon and evening periods of the day. Daily rainfall disaggregation using the MBLRPM alone resulted in a poor match between the disaggregated and observed hourly rainfalls. Stochastic redistribution of the outputs of the model using a Beta probability distribution function improved the agreement between observed and calculated hourly rain intensities. In areas where convective rainfall is dominant, the outputs of the MBLRPM should be redistributed using relevant probability distributions to simulate the diurnal rainfall pattern.
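A hedged Python sketch of the redistribution step (the Beta parameters and the stand-in model output are assumptions): wet hours produced by any daily-to-hourly disaggregation model are relocated within the day according to a Beta density skewed toward the afternoon and evening, so the daily total is preserved while a diurnal cycle is imposed.

import numpy as np

rng = np.random.default_rng(3)

def redistribute_diurnal(hourly, a=4.0, b=2.5):
    """Relocate the wet hours of a disaggregated day according to a Beta(a, b)
    density over the 24 h window (peaked in the afternoon/evening), keeping the
    daily total unchanged.  Illustrative only; parameters are assumptions."""
    hourly = np.asarray(hourly, dtype=float)
    wet = np.flatnonzero(hourly > 0)
    out = np.zeros_like(hourly)
    # Draw a target hour for each wet interval from the Beta distribution.
    targets = np.floor(rng.beta(a, b, size=wet.size) * 24).astype(int)
    for depth, hour in zip(hourly[wet], targets):
        out[hour] += depth
    return out

mblrpm_like = np.zeros(24)
mblrpm_like[[2, 3, 9]] = [1.0, 4.0, 2.5]        # stand-in model output (mm)
shifted = redistribute_diurnal(mblrpm_like)
print(shifted.sum(), shifted)                    # total preserved, mass moved later in the day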
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather patient’s vital signs and deliver them to doctors. One of the most significant challenges faced, is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed in multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and the link reliability of the neighboring nodes, which is used to balance the energy consumption and to satisfy QoS requirements in terms of end to end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end to end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
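A compact Python sketch of the selection rule (the weights and the exact cost form are assumptions, not the paper's function): neighbours with the minimum hop count are short-listed, and the one with the lowest combined cost over residual energy, free buffer space and link reliability is chosen.

from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    hop_count: int           # hops to the sink
    residual_energy: float   # joules remaining
    free_buffer: int         # free slots in the queue
    link_reliability: float  # 0..1 delivery probability

def select_next_hop(neighbors, w_energy=0.4, w_buffer=0.2, w_link=0.4):
    """Pick the neighbor with minimum hop count, breaking ties with a link cost
    that penalises low energy, small buffers and unreliable links."""
    min_hops = min(n.hop_count for n in neighbors)
    candidates = [n for n in neighbors if n.hop_count == min_hops]
    def cost(n):
        return (w_energy / max(n.residual_energy, 1e-9)
                + w_buffer / max(n.free_buffer, 1)
                + w_link / max(n.link_reliability, 1e-9))
    return min(candidates, key=cost)

table = [Neighbor(1, 2, 0.8, 5, 0.95), Neighbor(2, 2, 0.3, 9, 0.99), Neighbor(3, 3, 1.0, 9, 0.99)]
print(select_next_hop(table).node_id)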
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
47 CFR 80.60 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Partitioned licenses and disaggregated spectrum... licenses and disaggregated spectrum. (a) Except as specified in § 20.15(c) of this chapter with respect to... spectrum pursuant to the procedures set forth in this section. (2) AMTS geographic area licensees, see § 80...
Operational Plasticity Enables Hsp104 to Disaggregate Diverse Amyloid and Non-Amyloid Clients
DeSantis, Morgan E.; Leung, Eunice H.; Sweeny, Elizabeth A.; Jackrel, Meredith E.; Cushman-Nick, Mimi; Neuhaus-Follini, Alexandra; Vashist, Shilpa; Sochor, Matthew A.; Knight, M. Noelle; Shorter, James
2012-01-01
Summary It is not understood how Hsp104, a hexameric AAA+ ATPase from yeast, disaggregates diverse structures including stress-induced aggregates, prions, and α-synuclein conformers connected to Parkinson disease. Here, we establish that Hsp104 hexamers adapt different mechanisms of intersubunit collaboration to disaggregate stress-induced aggregates versus amyloid. To resolve disordered aggregates, Hsp104 subunits collaborate non-co-operatively via probabilistic substrate binding and ATP hydrolysis. To disaggregate amyloid, several subunits co-operatively engage substrate and hydrolyze ATP. Importantly, Hsp104 variants with impaired intersubunit communication dissolve disordered aggregates but not amyloid. Unexpectedly, prokaryotic ClpB subunits collaborate differently than Hsp104 and couple probabilistic substrate binding to cooperative ATP hydrolysis, which enhances disordered aggregate dissolution but sensitizes ClpB to inhibition and diminishes amyloid disaggregation. Finally, we establish that Hsp104 hexamers deploy more subunits to disaggregate Sup35 prion strains with more stable ‘cross-β’ cores. Thus, operational plasticity enables Hsp104 to robustly dissolve amyloid and non-amyloid clients, which impose distinct mechanical demands. PMID:23141537
An economic analysis of disaggregation of space assets: Application to GPS
NASA Astrophysics Data System (ADS)
Hastings, Daniel E.; La Tour, Paul A.
2017-05-01
New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.
NASA Astrophysics Data System (ADS)
Parra, Gustavo G.; Ferreira, Lucimara P.; Gonçalves, Pablo J.; Sizova, Svetlana V.; Oleinikov, Vladimir A.; Morozov, Vladimir N.; Kuzmin, Vladimir A.; Borissevitch, Iouri E.
2018-02-01
Interaction between porphyrins and quantum dots (QD) via energy and/or charge transfer is usually accompanied by a reduction of the QD luminescence intensity and lifetime. However, for CdSe/ZnS-Cys QD aqueous solutions kept at 276 K for 3 months (aged QD), a significant increase in the luminescence intensity upon addition of meso-tetrakis (p-sulfonato-phenyl) porphyrin (TPPS4) was observed in this study. Aggregation of QD during storage reduces the quantum yield and lifetime of their luminescence. Using steady-state and time-resolved fluorescence techniques, we demonstrated that TPPS4 stimulated disaggregation of aged CdSe/ZnS-Cys QD in aqueous solutions, increasing the quantum yield of their luminescence, which finally reached that of freshly prepared QD. Disaggregation takes place because of increased electrostatic repulsion between QD upon their binding with negatively charged porphyrin molecules. Binding of just four porphyrin molecules per single QD was sufficient for total QD disaggregation. The analysis of QD luminescence decay curves demonstrated that disaggregation more strongly affected the luminescence related to electron-hole annihilation in the QD shell. The obtained results demonstrate a way to repair aged QD by adding molecules or ions to the solutions that stimulate QD disaggregation and restore their luminescence characteristics, which could be important for QD biomedical applications such as bioimaging and fluorescence diagnostics. On the other hand, disaggregation is important for QD applications in biology and medicine since it reduces the size of the particles, facilitating their internalization into living cells across the cell membrane.
Pathway Towards Fluency: Using 'disaggregate instruction' to promote science literacy
NASA Astrophysics Data System (ADS)
Brown, Bryan A.; Ryoo, Kihyun; Rodriguez, Jamie
2010-07-01
This study examines the impact of Disaggregate Instruction on students' science learning. Disaggregate Instruction is the idea that science teaching and learning can be separated into conceptual and discursive components. Using randomly assigned experimental and control groups, 49 fifth-grade students received web-based science lessons on photosynthesis using our experimental approach. We supplemented quantitative statistical comparisons of students' performance on pre- and post-test questions (multiple choice and short answer) with a qualitative analysis of students' post-test interviews. The results revealed that students in the experimental group outscored their control group counterparts across all measures. In addition, students taught using the experimental method demonstrated an improved ability to write using scientific language as well as an improved ability to provide oral explanations using scientific language. This study has important implications for how science educators can prepare teachers to teach diverse student populations.
NASA Astrophysics Data System (ADS)
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall over a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
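A minimal Python illustration of the adjusting idea (a simplified proportional adjustment, not the HyetosMinute package itself): a synthetic fine-scale sequence standing in for a Bartlett-Lewis realisation is rescaled so that it aggregates exactly to the observed daily depth.

import numpy as np

rng = np.random.default_rng(4)

def proportional_adjust(fine, coarse_total):
    """Proportional adjusting step: rescale a synthetic fine-scale sequence so
    that it aggregates exactly to the observed coarse-scale total (a simplified
    stand-in for the adjusting procedures used with the Bartlett-Lewis model)."""
    fine = np.asarray(fine, dtype=float)
    s = fine.sum()
    if s == 0:
        return fine
    return fine * (coarse_total / s)

# Stand-in for a Bartlett-Lewis realisation at 5-min resolution (288 intervals).
synthetic = rng.gamma(0.2, 0.8, 288) * (rng.random(288) < 0.1)
adjusted = proportional_adjust(synthetic, coarse_total=22.4)   # observed daily depth (mm)
print(adjusted.sum())   # 22.4, consistent with the daily observation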
Automated Detection of Craters in Martian Satellite Imagery Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Norman, C. J.; Paxman, J.; Benedix, G. K.; Tan, T.; Bland, P. A.; Towner, M.
2018-04-01
Crater counting is used in determining surface age of planets. We propose improvements to martian Crater Detection Algorithms by implementing an end-to-end detection approach with the possibility of scaling the algorithm planet-wide.
ERIC Educational Resources Information Center
Museus, Samuel D.; Truong, Kimberly A.
2009-01-01
This article highlights the utility of disaggregating qualitative research and assessment data on Asian American college students. Given the complexity of and diversity within the Asian American population, scholars have begun to underscore the importance of disaggregating data in the empirical examination of Asian Americans, but most of those…
Overview of building energy use and report of analyses - 1985: buildings and community systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnader, M.; Lamontagne, J.
1985-10-01
The US Department of Energy (DOE) Office of Buildings and Community Systems (BCS) encourages increased efficiency of energy use in the buildings sector through the conduct of a comprehensive research program, the transfer of research results to industry, and the implementation of DOE's statutory responsibilities in the buildings area. This report summarizes the results of data development and analytical activities undertaken on behalf of BCS during 1985. It provides historical data on energy consumption patterns, prices, and building characteristics used in BCS's planning processes, documents BCS's detailed projections of energy use by end use and building type (the Disaggregate Projection), and compares this forecast to other forecasts. Summaries of selected recent BCS analyses are also provided.
NASA Technical Reports Server (NTRS)
Wang, Weile; Nemani, Ramakrishna R.; Michaelis, Andrew; Hashimoto, Hirofumi; Dungan, Jennifer L.; Thrasher, Bridget L.; Dixon, Keith W.
2016-01-01
The NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset is comprised of downscaled climate projections derived from 21 General Circulation Model (GCM) runs conducted under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and across two of the four greenhouse gas emissions scenarios (RCP4.5 and RCP8.5). Each of the climate projections includes daily maximum temperature, minimum temperature, and precipitation for the period from 1950 through 2100, and the spatial resolution is 0.25 degrees (approximately 25 km x 25 km). The GDDP dataset has been warmly received by the science community for conducting studies of climate change impacts at local to regional scales, but a comprehensive evaluation of its uncertainties is still missing. In this study, we apply the Perfect Model Experiment framework (Dixon et al. 2016) to quantify the key sources of uncertainties from the observational baseline dataset, the downscaling algorithm, and some intrinsic assumptions (e.g., the stationarity assumption) inherent to statistical downscaling techniques. We developed a set of metrics to evaluate downscaling errors resulting from bias correction ("quantile mapping"), spatial disaggregation, and the temporal-spatial non-stationarity of climate variability. Our results highlight the spatial disaggregation (or interpolation) errors, which dominate the overall uncertainties of the GDDP dataset, especially over heterogeneous and complex terrain (e.g., mountains and coastal areas). In comparison, the temporal errors in the GDDP dataset tend to be more constrained. Our results also indicate that the downscaled daily precipitation has relatively larger uncertainties than the temperature fields, reflecting the rather stochastic nature of precipitation in space. Therefore, our results provide insights for improving statistical downscaling algorithms and products in the future.
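A brief Python sketch of the quantile-mapping ("bias correction") step evaluated in the study, using synthetic data rather than the CMIP5 or baseline products: each future model value is mapped to the observed value at the same empirical quantile of the historical distributions.

import numpy as np

rng = np.random.default_rng(5)

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each future model value to the observed
    value at the same empirical quantile of the historical period.  A minimal
    sketch, not the NEX-GDDP production code."""
    model_hist = np.sort(model_hist)
    obs_hist = np.sort(obs_hist)
    # Quantile of each future value within the historical model distribution.
    q = np.searchsorted(model_hist, model_future) / len(model_hist)
    q = np.clip(q, 0, 1)
    return np.quantile(obs_hist, q)

model_hist = rng.normal(2.0, 1.2, 5000)       # biased model climatology
obs_hist = rng.normal(0.0, 1.0, 5000)         # observational baseline
model_future = rng.normal(3.0, 1.2, 10)       # future projection (shifted)
print(quantile_map(model_hist, obs_hist, model_future))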
Autonomous subpixel satellite track end point determination for space-based images.
Simms, Lance M
2011-08-01
An algorithm for determining satellite track end points with subpixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the subpixel end point determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.
NASA Astrophysics Data System (ADS)
Subuddhi, Usharani; Vuram, Prasanna K.; Chadha, Anju; Mishra, Ashok K.
2014-07-01
A reversal in solvatochromic behaviour was observed in second and third generation glycerol-based dansylated polyether dendrons in water on addition of a second solvent such as methanol or acetonitrile. Below a certain percentage of the nonaqueous solvent, negative solvatochromism is observed, and above that there is a switch to positive solvatochromism. The negative solvatochromism is attributed to the progressive disaggregation of the dendron aggregates by the nonaqueous solvent component. Once the disaggregation process is complete, positive solvatochromism is exhibited by the dendron monomers. The higher the hydrophobicity of the dendron, the greater the amount of the second solvent required for disaggregation.
Automatic detection of end-diastolic and end-systolic frames in 2D echocardiography.
Zolgharni, Massoud; Negoita, Madalina; Dhutia, Niti M; Mielewczik, Michael; Manoharan, Karikaran; Sohaib, S M Afzal; Finegold, Judith A; Sacchi, Stefania; Cole, Graham D; Francis, Darrel P
2017-07-01
Correctly selecting the end-diastolic and end-systolic frames on a 2D echocardiogram is important and challenging, for both human experts and automated algorithms. Manual selection is time-consuming and subject to uncertainty, and may affect the results obtained, especially for advanced measurements such as myocardial strain. We developed and evaluated algorithms which can automatically extract global and regional cardiac velocity, and identify end-diastolic and end-systolic frames. We acquired apical four-chamber 2D echocardiographic video recordings, each at least 10 heartbeats long, acquired twice at frame rates of 52 and 79 frames/s from 19 patients, yielding 38 recordings. Five experienced echocardiographers independently marked end-systolic and end-diastolic frames for the first 10 heartbeats of each recording. The automated algorithm also did this. Using the average of time points identified by five human operators as the reference gold standard, the individual operators had a root mean square difference from that gold standard of 46.5 ms. The algorithm had a root mean square difference from the human gold standard of 40.5 ms (P<.0001). Put another way, the algorithm-identified time point was an outlier in 122/564 heartbeats (21.6%), whereas the average human operator was an outlier in 254/564 heartbeats (45%). An automated algorithm can identify the end-systolic and end-diastolic frames with performance indistinguishable from that of human experts. This saves staff time, which could therefore be invested in assessing more beats, and reduces uncertainty about the reliability of the choice of frame. © 2017, Wiley Periodicals, Inc.
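A small Python sketch of the evaluation logic (synthetic timings, and an assumed outlier rule, since the study's exact definition is not reproduced here): operator picks are averaged to form the gold standard, root mean square differences are computed against it, and algorithm picks falling farther from the gold standard than any individual operator are counted as outliers.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic end-systolic frame times (ms) marked by five operators for 20 beats,
# plus the algorithm's picks; these are stand-ins, not the study data.
true_times = rng.uniform(300, 400, 20)
operators = true_times + rng.normal(0, 45, (5, 20))
algorithm = true_times + rng.normal(0, 35, 20)

gold = operators.mean(axis=0)                        # average of the five experts

def rms_diff(picks, reference):
    return float(np.sqrt(np.mean((picks - reference) ** 2)))

# A pick counts as an "outlier" if it lies further from the gold standard than
# every individual operator does for that beat (one simple, assumed definition).
op_dist = np.abs(operators - gold)
alg_outliers = np.sum(np.abs(algorithm - gold) > op_dist.max(axis=0))

print("operator RMS:", rms_diff(operators[0], gold))
print("algorithm RMS:", rms_diff(algorithm, gold))
print("algorithm outlier beats:", int(alg_outliers))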
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Néstor Y. Rojas, N. Y.
2011-12-01
As a result of rapid urbanization during the last 60 years, 75% of the Colombian population now lives in cities. Urban areas are net sources of greenhouse gases (GHG) and contribute significantly to national GHG emission inventories. The development of scientifically sound GHG mitigation strategies requires accurate GHG source and sink estimations. Disaggregated inventories are effective mitigation decision-making tools. The disaggregation process renders detailed information on the distribution of emissions by transport mode, and the resulting a priori emissions map allows for optimal siting of GHG flux monitoring, whether by eddy covariance or inverse modeling techniques. Fossil fuel use in transportation is a major source of carbon dioxide (CO2) in Bogota. We present estimates of CO2 emissions from road traffic in Bogota using the Intergovernmental Panel on Climate Change (IPCC) reference method and a spatial and temporal disaggregation method. Aggregated CO2 emissions from mobile sources were estimated from monthly and annual fossil fuel (gasoline, diesel and compressed natural gas - CNG) consumption statistics, and estimations of bio-ethanol and bio-diesel use. Although bio-fuel CO2 emissions are considered balanced over annual (or multi-annual) agricultural cycles, we included them since CO2 generated by their combustion would be measurable by a net flux monitoring system. For the disaggregation methodology, we used information on Bogota's road network classification, mean travel speed and trip length for each vehicle category and road type. The CO2 emission factors were taken from recent in-road measurements for gasoline- and CNG-powered vehicles and also estimated from COPERT IV. We estimated emission factors for diesel from surveys on average trip length and fuel consumption. Using IPCC's reference method, we estimate Bogota's total transport-related CO2 emissions for 2008 (reference year) at 4.8 Tg CO2. The disaggregation method estimate is 16% lower, mainly due to uncertainty in activity factors. With only 4% of Bogota's fleet, diesel use accounts for 42% of the CO2 emissions. The emissions are almost evenly shared between public (9% of the fleet) and private transport. Peak emissions occur at 8 a.m. and 6 p.m., with maximum values over a densely industrialized area in the northwest of Bogota. This investigation allowed estimating the relative contribution of fuel and vehicle categories to spatially and temporally resolved CO2 emissions. Fuel consumption time series indicate a near-stabilization trend in energy consumption for transportation, which is unexpected given the sustained economic and vehicle fleet growth in Bogota. The comparison of the disaggregation methodology with the IPCC methodology contributes to the analysis of possible error sources in activity factor estimations. This information is very useful for uncertainty estimation and adjustment of primary air pollutant emission inventories.
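A toy Python example of the two estimation layers described above, with placeholder numbers rather than the Bogota statistics: the reference-method total is the product of fuel consumption and emission factors, and a simple surrogate weight (road length times traffic volume) spreads that total over road links.

# Aggregate CO2 from fuel sales: emissions = consumption * emission factor.
# The numbers below are placeholders for illustration, not the Bogota inventory values.
fuels = {
    #            consumption (TJ/yr), CO2 emission factor (t CO2 / TJ)
    "gasoline": (45_000, 69.3),
    "diesel":   (30_000, 74.1),
    "cng":      (12_000, 56.1),
}

total_t_co2 = sum(tj * ef for tj, ef in fuels.values())
print(f"total: {total_t_co2 / 1e6:.2f} Tg CO2/yr")

# Top-down spatial disaggregation: spread the total over road links in
# proportion to (length x traffic volume), an assumed surrogate for activity.
links = [("link_A", 4.2, 30_000), ("link_B", 1.1, 55_000), ("link_C", 8.0, 8_000)]
weights = [length * volume for _, length, volume in links]
for (name, _, _), w in zip(links, weights):
    print(name, round(total_t_co2 * w / sum(weights), 1), "t CO2/yr")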
Equity in health care financing in Palestine: the value-added of the disaggregate approach.
Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul
2008-06-01
This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution, using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, significantly reveals a progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance schemes to enhance its capacity in limiting inequalities in health care financing in the Occupied Palestinian Territory.
Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun
2018-01-01
The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to the grid-cell level. To do so, finding highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. The accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in an exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
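A minimal Python sketch of the top-down disaggregation (the way the three layers are combined into a single weight is an assumption for illustration): each administrative total is spread over its grid cells in proportion to the combined surrogate weight.

import numpy as np

def disaggregate_value(admin_total, weight_grid):
    """Top-down disaggregation: spread an administrative-unit asset total over
    its grid cells in proportion to a combined weight layer (e.g. nighttime
    light x population x road density).  Weights here are illustrative."""
    w = np.asarray(weight_grid, dtype=float)
    if w.sum() == 0:
        return np.full_like(w, admin_total / w.size)
    return admin_total * w / w.sum()

night_light = np.array([[0.1, 0.8], [0.4, 0.0]])
population  = np.array([[200, 900], [350, 10]])
road_dens   = np.array([[0.2, 1.0], [0.6, 0.1]])
weights = night_light * population * road_dens      # one simple way to combine layers

cells = disaggregate_value(admin_total=5_000.0, weight_grid=weights)
print(cells, cells.sum())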
T-wave end detection using neural networks and Support Vector Machines.
Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román
2018-05-01
In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both, Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hydro-meteorological evaluation of downscaled global ensemble rainfall forecasts
NASA Astrophysics Data System (ADS)
Gaborit, Étienne; Anctil, François; Fortin, Vincent; Pelletier, Geneviève
2013-04-01
Ensemble rainfall forecasts are of high interest for decision making, as they provide an explicit and dynamic assessment of the uncertainty in the forecast (Ruiz et al. 2009). However, for hydrological forecasting, their low resolution currently limits their use to large watersheds (Maraun et al. 2010). In order to bridge this gap, various implementations of the statistic-stochastic multi-fractal downscaling technique presented by Perica and Foufoula-Georgiou (1996) were compared, bringing Environment Canada's global ensemble rainfall forecasts from a 100 by 70-km resolution down to 6 by 4-km, while increasing each pixel's rainfall variance and preserving its original mean. For comparison purposes, simpler methods were also implemented, such as bi-linear interpolation, which disaggregates global forecasts without modifying their variance. The downscaled meteorological products were evaluated using different scores and diagrams, from both meteorological and hydrological viewpoints. The meteorological evaluation was conducted by comparing the forecasted rainfall depths against nine days of observed values taken from the Québec City rain gauge database. These 9 days present strong precipitation events occurring during the summer of 2009. For the hydrologic evaluation, the hydrological models SWMM5 and (a modified version of) GR4J were implemented on a small 6 km2 urban catchment located in the Québec City region. Ensemble hydrologic forecasts with a time step of 3 hours were then performed over a 3-month period of the summer of 2010 using the original and downscaled ensemble rainfall forecasts. The most important conclusions of this work are that the overall quality of the forecasts was preserved during the disaggregation procedure and that the disaggregated products using this variance-enhancing method were of similar quality to the bi-linear interpolation products. However, the variance and dispersion of the different members were, of course, much improved for the variance-enhanced products compared with bi-linear interpolation, which is a decisive advantage. The disaggregation technique of Perica and Foufoula-Georgiou (1996) hence represents an interesting way of bridging the gap between the meteorological models' resolution and the high degree of spatial precision sometimes required by hydrological models in their precipitation representation. References: Maraun, D., Wetterhall, F., Ireson, A. M., Chandler, R. E., Kendon, E. J., Widmann, M., Brienen, S., Rust, H. W., Sauter, T., Themeßl, M., Venema, V. K. C., Chun, K. P., Goodess, C. M., Jones, R. G., Onof, C., Vrac, M., and Thiele-Eich, I. 2010. Precipitation downscaling under climate change: recent developments to bridge the gap between dynamical models and the end user. Reviews of Geophysics, 48(3): RG3003. DOI: 10.1029/2009RG000314. Perica, S., and Foufoula-Georgiou, E. 1996. Model for multiscale disaggregation of spatial rainfall based on coupling meteorological and scaling descriptions. Journal of Geophysical Research, 101(D21): 26347-26361. Ruiz, J., Saulo, C., and Kalnay, E. 2009. Comparison of Methods Used to Generate Probabilistic Quantitative Precipitation Forecasts over South America. Weather and Forecasting, 24: 319-336. DOI: 10.1175/2008WAF2007098.1
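A generic Python stand-in for the variance-enhancing disaggregation compared above (not the Perica and Foufoula-Georgiou cascade itself; the lognormal multipliers and their spread are assumptions): each coarse pixel is split into fine pixels using multiplicative weights renormalised to mean one within the coarse pixel, so the coarse mean is preserved while sub-pixel variance is added.

import numpy as np

rng = np.random.default_rng(7)

def downscale_variance_enhancing(coarse, factor=4, cv=0.5):
    """Split each coarse rainfall value into factor x factor fine pixels using
    multiplicative weights renormalised to mean 1 within the coarse pixel, so
    the coarse-pixel mean is preserved while sub-pixel variance is added."""
    ny, nx = coarse.shape
    fine = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    noise = rng.lognormal(mean=0.0, sigma=cv, size=fine.shape)
    # Renormalise the noise block by block so each coarse pixel keeps its mean.
    for j in range(ny):
        for i in range(nx):
            block = noise[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            block /= block.mean()
    return fine * noise

coarse = np.array([[2.0, 0.0], [5.0, 1.0]])
fine = downscale_variance_enhancing(coarse)
print(fine.reshape(2, 4, 2, 4).mean(axis=(1, 3)))   # equals the coarse field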
A New Data Set of Educational Attainment in the World, 1950-2010. NBER Working Paper No. 15902
ERIC Educational Resources Information Center
Barro, Robert J.; Lee, Jong-Wha
2010-01-01
Our panel data set on educational attainment has been updated for 146 countries from 1950 to 2010. The data are disaggregated by sex and by 5-year age intervals. We have improved the accuracy of estimation by using information from consistent census data, disaggregated by age group, along with new estimates of mortality rates and completion rates…
Modeling global Hammond landform regions from 250-m elevation data
Karagulle, Deniz; Frye, Charlie; Sayre, Roger; Breyer, Sean P.; Aniello, Peter; Vaughan, Randy; Wright, Dawn J.
2017-01-01
In 1964, E.H. Hammond proposed criteria for classifying and mapping physiographic regions of the United States. Hammond produced a map entitled "Classes of Land Surface Form in the Forty-Eight States, USA", which is regarded as a pioneering and rigorous treatment of regional physiography. Several researchers automated Hammond's model in GIS. However, these were local or regional in application, and resulted in inadequate characterization of tablelands. We used a global 250 m DEM to produce a new characterization of global Hammond landform regions. The improved algorithm we developed for the regional landform modeling: (1) incorporated a profile parameter for the delineation of tablelands; (2) accommodated negative elevation data values; (3) allowed neighborhood analysis window (NAW) size to vary between parameters; (4) more accurately bounded plains regions; and (5) mapped landform regions as opposed to discrete landform features. The new global Hammond landform regions product builds on an existing global Hammond landform features product developed by the U.S. Geological Survey, which, while globally comprehensive, did not include tablelands, used a fixed NAW size, and essentially classified pixels rather than regions. Our algorithm also permits the disaggregation of "mixed" Hammond types (e.g. plains with high mountains) into their component parts.
Evolution of an intricate J-protein network driving protein disaggregation in eukaryotes.
Nillegoda, Nadinath B; Stank, Antonia; Malinverni, Duccio; Alberts, Niels; Szlachcic, Anna; Barducci, Alessandro; De Los Rios, Paolo; Wade, Rebecca C; Bukau, Bernd
2017-05-15
Hsp70 participates in a broad spectrum of protein folding processes extending from nascent chain folding to protein disaggregation. This versatility in function is achieved through a diverse family of J-protein cochaperones that select substrates for Hsp70. Substrate selection is further tuned by transient complexation between different classes of J-proteins, which expands the range of protein aggregates targeted by metazoan Hsp70 for disaggregation. We assessed the prevalence and evolutionary conservation of J-protein complexation and cooperation in disaggregation. We find the emergence of a eukaryote-specific signature for interclass complexation of canonical J-proteins. Consistently, complexes exist in yeast and human cells, but not in bacteria, and correlate with cooperative action in disaggregation in vitro. Signature alterations exclude some J-proteins from networking, which ensures correct J-protein pairing, functional network integrity and J-protein specialization. This fundamental change in J-protein biology during the prokaryote-to-eukaryote transition allows for increased fine-tuning and broadening of Hsp70 function in eukaryotes.
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
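A rigid-link Python sketch of the static positioning loop (link lengths, gain and the simulated 'vision' measurement are assumptions; the flexible-link dynamics of the thesis are not modelled): the measured tip error is mapped through the manipulator Jacobian to a joint correction, which is fed back as a new joint reference.

import numpy as np

L1, L2 = 1.0, 0.8   # link lengths (illustrative)

def forward(q):
    x = L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1])
    y = L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def static_position(q, target, vision_noise=0.0, iters=20, gain=0.8):
    """Iterative look-and-move positioning: a 'vision' measurement of the tip
    (forward kinematics plus optional noise stands in for the camera) yields a
    tip error that the Jacobian maps to a joint correction."""
    rng = np.random.default_rng(8)
    for _ in range(iters):
        measured_tip = forward(q) + rng.normal(0, vision_noise, 2)
        err = target - measured_tip
        dq = np.linalg.solve(jacobian(q), gain * err)   # solve J dq = gain * error
        q = q + dq
    return q

q = static_position(np.array([0.3, 0.5]), target=np.array([1.2, 0.9]))
print(forward(q))   # close to the target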
Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan
2017-01-01
Abstract National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0–92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women’s access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve an equitable and effective HIV response and align government planning with international priorities for gender equality. PMID:28973358
ERIC Educational Resources Information Center
Nguyen, Bach Mai Dolly; Nguyen, Mike Hoa; Teranishi, Robert T.; Hune, Shirley
2015-01-01
Utilizing disaggregated data from the Office of the Superintendent of Public Instruction (OSPI) and the Educational Research Data Center (ERDC), this report offers a deeper and more nuanced perspective on the educational realities of Asian Americans and Pacific Islander (AAPI) students and reinforces the need for disaggregated data to unmask the…
An IoT-Based Gamified Approach for Reducing Occupants’ Energy Wastage in Public Buildings
Dimitriou, Nikos; Vasilakis, Kostas; Schoofs, Anthony; Nikiforakis, Manolis; Pursche, Fabian; Deliyski, Nikolay; Taha, Amr; Kotsopoulos, Dimosthenis; Bardaki, Cleopatra; Kotsilitis, Sarantis; Garbi, Anastasia
2018-01-01
Conserving energy amenable to the activities of occupants in public buildings is a particularly challenging objective that includes associating energy consumption to particular individuals and providing them with incentives to alter their behavior. This paper describes a gamification framework that aims to facilitate achieving greater energy conservation in public buildings. The framework leverages IoT-enabled low-cost devices, to improve energy disaggregation mechanisms that provide energy use and—consequently—wastage information at the device, area and end-user level. The identified wastages are concurrently targeted by a gamified application that motivates respective behavioral changes combining team competition, virtual rewards and life simulation. Our solution is being developed iteratively with the end-users’ engagement during the analysis, design, development and validation phases in public buildings located in three different countries: Luxembourg (Musée National d’Histoire et d’Art), Spain (EcoUrbanBuilding, Institut Català d’Energia headquarters, Barcelona) and Greece (General Secretariat of the Municipality of Athens). PMID:29439414
An IoT-Based Gamified Approach for Reducing Occupants' Energy Wastage in Public Buildings.
Papaioannou, Thanasis G; Dimitriou, Nikos; Vasilakis, Kostas; Schoofs, Anthony; Nikiforakis, Manolis; Pursche, Fabian; Deliyski, Nikolay; Taha, Amr; Kotsopoulos, Dimosthenis; Bardaki, Cleopatra; Kotsilitis, Sarantis; Garbi, Anastasia
2018-02-10
Conserving energy amenable to the activities of occupants in public buildings is a particularly challenging objective that includes associating energy consumption to particular individuals and providing them with incentives to alter their behavior. This paper describes a gamification framework that aims to facilitate achieving greater energy conservation in public buildings. The framework leverages IoT-enabled low-cost devices, to improve energy disaggregation mechanisms that provide energy use and-consequently-wastage information at the device, area and end-user level. The identified wastages are concurrently targeted by a gamified application that motivates respective behavioral changes combining team competition, virtual rewards and life simulation. Our solution is being developed iteratively with the end-users' engagement during the analysis, design, development and validation phases in public buildings located in three different countries: Luxembourg (Musée National d'Histoire et d'Art), Spain (EcoUrbanBuilding, Institut Català d'Energia headquarters, Barcelona) and Greece (General Secretariat of the Municipality of Athens).
Energy prediction using spatiotemporal pattern networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhanhong; Liu, Chao; Akintayo, Adedotun
This paper presents a novel data-driven technique based on the spatiotemporal pattern network (STPN) for energy/power prediction for complex dynamical systems. Built on symbolic dynamical filtering, the STPN framework is used to capture not only the individual system characteristics but also the pair-wise causal dependencies among different sub-systems. To quantify causal dependencies, a mutual information based metric is presented and an energy prediction approach is subsequently proposed based on the STPN framework. To validate the proposed scheme, two case studies are presented, one involving wind turbine power prediction (supply side energy) using the Western Wind Integration data set generated by the National Renewable Energy Laboratory (NREL) for identifying spatiotemporal characteristics, and the other, residential electric energy disaggregation (demand side energy) using the Building America 2010 data set from NREL for exploring temporal features. In the energy disaggregation context, convex programming techniques beyond the STPN framework are developed and applied to achieve improved disaggregation performance.
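To make the mutual-information metric concrete, the following minimal sketch scores the pairwise dependency between two symbolized sub-system time series. The quantile-based symbolization, the four-symbol alphabet, and the synthetic turbine signals are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def symbolize(x, n_symbols=4):
    """Map a real-valued series to discrete symbols using quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_information(a, b, n_symbols=4):
    """Empirical mutual information (in bits) between two symbol sequences."""
    joint = np.zeros((n_symbols, n_symbols))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# Illustrative use: score the dependency between two neighboring turbine signals.
rng = np.random.default_rng(0)
power_a = rng.normal(size=1000).cumsum()
power_b = 0.7 * power_a + rng.normal(size=1000)   # correlated "neighbor" signal
mi = mutual_information(symbolize(power_a), symbolize(power_b))
print(f"mutual information: {mi:.3f} bits")
```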
An adaptive inverse kinematics algorithm for robot manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1990-01-01
An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
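For readers unfamiliar with sensing-based inverse kinematics, the sketch below shows a conventional resolved-rate iteration that estimates the Jacobian numerically from end-effector measurements alone; it is not the MRAC scheme of the paper, only a simplified illustration of solving IK without an analytic kinematic model. The planar two-link arm, step gain, and target values are assumptions.

```python
import numpy as np

def fk_planar_2dof(theta, link_lengths=(1.0, 1.0)):
    """Forward kinematics of a planar 2-DOF arm (stand-in for end-effector sensing)."""
    l1, l2 = link_lengths
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def numeric_jacobian(f, theta, eps=1e-6):
    """Estimate the Jacobian from measured end-effector positions only."""
    base = f(theta)
    J = np.zeros((base.size, theta.size))
    for k in range(theta.size):
        dtheta = theta.copy()
        dtheta[k] += eps
        J[:, k] = (f(dtheta) - base) / eps
    return J

def ik_step(theta, target, f=fk_planar_2dof, gain=0.5):
    """One resolved-rate update driving the end effector toward the target."""
    error = target - f(theta)
    J = numeric_jacobian(f, theta)
    return theta + gain * np.linalg.pinv(J) @ error

theta = np.array([0.3, 0.3])
target = np.array([1.2, 0.8])
for _ in range(50):
    theta = ik_step(theta, target)
print(theta, fk_planar_2dof(theta))   # joint angles that reach the target
```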
47 CFR 27.1333 - Geographic partitioning, spectrum disaggregation, license assignment, and transfer.
Code of Federal Regulations, 2010 CFR
2010-10-01
... COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS COMMUNICATIONS SERVICES 700 MHz Public/Private Partnership § 27.1333 Geographic partitioning, spectrum disaggregation, license...
47 CFR 27.1333 - Geographic partitioning, spectrum disaggregation, license assignment, and transfer.
Code of Federal Regulations, 2011 CFR
2011-10-01
... COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS COMMUNICATIONS SERVICES 700 MHz Public/Private Partnership § 27.1333 Geographic partitioning, spectrum disaggregation, license...
47 CFR 27.1333 - Geographic partitioning, spectrum disaggregation, license assignment, and transfer.
Code of Federal Regulations, 2012 CFR
2012-10-01
... COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS COMMUNICATIONS SERVICES 700 MHz Public/Private Partnership § 27.1333 Geographic partitioning, spectrum disaggregation, license...
The prospects for solar energy use in industry within the United Kingdom
NASA Astrophysics Data System (ADS)
Lewis, C. W.
1980-01-01
An assessment of the potential for solar energy applications within U.K. industry has been made, using a disaggregated breakdown of energy consumption in the eight industrial sectors by fuel and end-use, and taking account of solar collector performance under U.K. climatic conditions. Solar contributions of 35 per cent of process boiler heat up to a temperature of 80 °C and 10 per cent in the 80–120 °C range are considered feasible, along with 35 per cent of non-industrial water heating. After employing energy conservation techniques currently more cost-effective than solar systems, an additional 3.5 per cent of U.K. primary energy expended in manufacturing industry (excluding iron and steel production) could be contributed by solar. This represents 1 per cent of the U.K. national primary energy demand.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the Earth's surface to the sensor's detector. The lack of ground-truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracies in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
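A minimal sketch of the forward-simulation step is given below, using the standard thermal-infrared at-sensor radiance equation (surface emission plus reflected downwelling, attenuated by the atmosphere, plus path radiance). The transmittance, path-radiance, and emissivity profiles are placeholder values, not outputs of the radiative-transfer code used in the study.

```python
import numpy as np

# Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 um^-1
H = 6.626e-34   # J s
C = 2.998e8     # m s^-1
KB = 1.381e-23  # J K^-1

def planck(wl_um, temp_k):
    wl = wl_um * 1e-6
    rad = 2 * H * C**2 / (wl**5 * (np.exp(H * C / (wl * KB * temp_k)) - 1.0))
    return rad * 1e-6          # per micrometre instead of per metre

def at_sensor_radiance(wl_um, temp_k, emissivity, tau, l_up, l_down):
    """Standard TIR forward model: surface emission plus reflected downwelling,
    attenuated by the atmosphere, plus atmospheric path radiance."""
    surface = emissivity * planck(wl_um, temp_k) + (1.0 - emissivity) * l_down
    return tau * surface + l_up

# Illustrative values only (not from the paper): a quartz-like surface at 300 K
wl = np.linspace(8.0, 11.5, 50)                        # micrometres
emis = 0.96 - 0.1 * np.exp(-((wl - 9.0) ** 2) / 0.2)   # synthetic reststrahlen dip
tau = np.full_like(wl, 0.85)                           # placeholder transmittance
l_up = 0.4 * planck(wl, 280.0)                         # placeholder path radiance
l_down = 0.5 * planck(wl, 270.0)                       # placeholder sky radiance
radiance = at_sensor_radiance(wl, 300.0, emis, tau, l_up, l_down)
```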
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space ALADIN will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms an end-to-end simulator was developed. This allows realistic simulations of data downlinked by Aeolus. Together with operational processors this setup is used to assess random and systematic error sources and perform sensitivity studies about the influence of atmospheric and instrument parameters.
Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan
2017-12-01
National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0-92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women's access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve an equitable and effective HIV response and align government planning with international priorities for gender equality. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and with little knowledge on the pollution situation to get an overview on the spatial distribution of the emissions generated by traffic activities.
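The simplified top-down step can be illustrated as follows: the city-total emission is shared among grid cells in proportion to street length, and the result is compared with a bottom-up map through a correlation coefficient. The grid size and all values below are synthetic assumptions for illustration only, not data from the study.

```python
import numpy as np

def disaggregate_by_street_density(total_emission, street_length_km):
    """Top-down step: share the city-wide emission among grid cells in
    proportion to the street length (a proxy for traffic activity) per cell."""
    weights = street_length_km / street_length_km.sum()
    return total_emission * weights

def spatial_agreement(top_down_map, bottom_up_map):
    """Pearson correlation between the two gridded inventories."""
    return float(np.corrcoef(top_down_map.ravel(), bottom_up_map.ravel())[0, 1])

# Illustrative 10 x 10 grid (values are synthetic, not from the study)
rng = np.random.default_rng(1)
streets = rng.gamma(shape=2.0, scale=1.5, size=(10, 10))       # km of road per cell
bottom_up = streets * rng.lognormal(mean=0.0, sigma=0.3, size=(10, 10))
top_down = disaggregate_by_street_density(bottom_up.sum(), streets)
print(f"r = {spatial_agreement(top_down, bottom_up):.2f}")
```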
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Pauwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even as the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
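A minimal sketch of directly assimilating a coarse observation is given below, assuming an observation operator that simply averages the fine-scale cells inside the coarse pixel. It is a generic stochastic EnKF update in the spirit of the direct-assimilation methods described above, not the study's exact configuration; the ensemble size, error variances, and grid are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """Stochastic EnKF analysis step.
    ensemble : (n_state, n_members) fine-scale SWE members
    H        : (n_obs, n_state) operator averaging fine cells to coarse pixels"""
    n_state, n_members = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean                       # state anomalies
    HA = H @ A                                  # anomalies in observation space
    P_hh = HA @ HA.T / (n_members - 1) + obs_var * np.eye(H.shape[0])
    P_xh = A @ HA.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)              # Kalman gain
    rng = np.random.default_rng(0)
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (H.shape[0], n_members))
    return ensemble + K @ (perturbed - H @ ensemble)

# Toy setup: 25 fine cells observed as a single coarse average (values illustrative)
n_fine, n_members = 25, 30
H = np.full((1, n_fine), 1.0 / n_fine)          # coarse pixel = mean of fine cells
rng = np.random.default_rng(42)
truth = 100.0 + 20.0 * rng.standard_normal(n_fine)               # mm SWE
prior = truth[:, None] + 30.0 * rng.standard_normal((n_fine, n_members))
coarse_obs = np.array([truth.mean() + rng.normal(0, 5.0)])
analysis = enkf_update(prior, coarse_obs, obs_var=25.0, H=H)
```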
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Lynn; Worrell, Ernst; Khrushch, Marta
Disaggregation of sectoral energy use and greenhouse gas emissions trends reveals striking differences between sectors and regions of the world. Understanding key driving forces in the energy end-use sectors provides insights for development of projections of future greenhouse gas emissions. This report examines global and regional historical trends in energy use and carbon emissions in the industrial, buildings, transport, and agriculture sectors, with a more detailed focus on industry and buildings. Activity and economic drivers as well as trends in energy and carbon intensity are evaluated. The authors show that macro-economic indicators, such as GDP, are insufficient for comprehending trends and driving forces at the sectoral level. These indicators need to be supplemented with sector-specific information for a more complete understanding of future energy use and greenhouse gas emissions.
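The point that macro-economic indicators alone are insufficient can be illustrated with the usual sectoral identity, emissions = activity x energy intensity x carbon intensity. The sketch below uses invented numbers for illustration; they are not taken from the report.

```python
def sector_emissions(activity, energy_intensity, carbon_intensity):
    """Carbon emissions from an end-use sector via the identity
    emissions = activity x (energy / activity) x (carbon / energy)."""
    return activity * energy_intensity * carbon_intensity

# Illustrative numbers only: two sectors with the same economic activity
# but very different energy and carbon intensities.
industry = sector_emissions(activity=1.0e12,            # $ of value added
                            energy_intensity=8.0e-3,    # GJ per $
                            carbon_intensity=0.07)      # tC per GJ
buildings = sector_emissions(activity=1.0e12,
                             energy_intensity=3.0e-3,
                             carbon_intensity=0.05)
print(industry, buildings)   # same GDP contribution, roughly 3-4x different emissions
```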
Aggregation and Disaggregation of Senile Plaques in Alzheimer Disease
NASA Astrophysics Data System (ADS)
Cruz, L.; Urbanc, B.; Buldyrev, S. V.; Christie, R.; Gomez-Isla, T.; Havlin, S.; McNamara, M.; Stanley, H. E.; Hyman, B. T.
1997-07-01
We quantitatively analyzed, using laser scanning confocal microscopy, the three-dimensional structure of individual senile plaques in Alzheimer disease. We carried out the quantitative analysis using statistical methods to gain insights about the processes that govern Aβ peptide deposition. Our results show that plaques are complex porous structures with characteristic pore sizes. We interpret plaque morphology in the context of a new dynamical model based on competing aggregation and disaggregation processes in kinetic steady-state equilibrium with an additional diffusion process allowing Aβ deposits to diffuse over the surface of plaques.
The Value of Advanced Smart Metering in the Management of Urban Water Supply Services
NASA Astrophysics Data System (ADS)
Guardiola, J.; Pulido-Velazquez, M.; Giuliani, M.; Castelletti, A.; Cominola, A.; Arregui de la Cruz, F.; Escriva-Bou, A.; Soriano, J.; Pérez, J. J.; Castillo, J.; Barba, J.; González, V.; Rizzoli, A. E.
2016-12-01
This work intends to outline the experience of the implementation and further exploitation of an extensive network of smart meters (SM) in the city of Valencia by Aguas de Valencia, the water utility that offers water supply and sanitation services to the city of Valencia and its metropolitan area. Valencia has become the first large city in Europe fully equipped with a point-to-point fixed network of SM (currently with more than 430,000 units, about 90% of the meters of the city). The shift towards a water supply management system based on SM is a complex process that entails changes and impacts on different management areas of the water supply organization. A new data management and processing platform has been developed and is already providing notable benefits in the operation of the system. For example, a tool allows work orders to be automatically issued and managed when abnormalities such as internal leaks (constant consumption) or meter alarms are detected. Another tool has been developed to reduce levels of non-revenue water by continuously balancing supply and demand in district metered areas. Improving leak detection and adjusting pressure levels has significantly increased the efficiency of the water distribution network. Finally, a service of post-meter leak detection has also been implemented. The SM also contribute to improved demand management. The customers now receive detailed information on their water consumption, valuable for improving household water management and assessing the value of water conservation strategies. SM are also key tools for improving the level of understanding of demand patterns. Users have been categorized into different clusters depending on their consumption pattern characteristics. Within the EU SmartH2O project, high-resolution, high-frequency monitoring of residential uses has been conducted in a selected sample of households for a precise disaggregation of residential end-uses. The disaggregation of end-uses allows for a better characterization and modelling of residential water demand, and, ultimately, the design of efficient user-oriented water management strategies.
Revisiting the Rise of Electronic Nicotine Delivery Systems Using Search Query Surveillance.
Ayers, John W; Althouse, Benjamin M; Allem, Jon-Patrick; Leas, Eric C; Dredze, Mark; Williams, Rebecca S
2016-06-01
Public perceptions of electronic nicotine delivery systems (ENDS) remain poorly understood because surveys are too costly to regularly implement and, when implemented, there are long delays between data collection and dissemination. Search query surveillance has bridged some of these gaps. Herein, ENDS' popularity in the U.S. is reassessed using Google searches. ENDS searches originating in the U.S. from January 2009 through January 2015 were disaggregated by terms focused on e-cigarette (e.g., e-cig) versus vaping (e.g., vapers); their geolocation (e.g., state); the aggregate tobacco control measures corresponding to their geolocation (e.g., clean indoor air laws); and by terms that indicated the searcher's potential interest (e.g., buy e-cigs likely indicates shopping)-all analyzed in 2015. ENDS searches are rapidly increasing in the U.S., with 8,498,000 searches during 2014 alone. Increasingly, searches are shifting from e-cigarette- to vaping-focused terms, especially in coastal states and states where anti-smoking norms are stronger. For example, nationally, e-cigarette searches declined 9% (95% CI=1%, 16%) during 2014 compared with 2013, whereas vaping searches increased 136% (95% CI=97%, 186%), even surpassing e-cigarette searches. Additionally, the percentage of ENDS searches related to shopping (e.g., vape shop) nearly doubled in 2014, whereas searches related to health concerns (e.g., vaping risks) or cessation (e.g., quit smoking with e-cigs) were rare and declined in 2014. ENDS popularity is rapidly growing and evolving. These findings could inform survey questionnaire development for follow-up investigation and immediately guide policy debates about how the public perceives the health risks or cessation benefits of ENDS. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Regional comparisons of on-site solar potential in the residential and industrial sectors
NASA Astrophysics Data System (ADS)
Gatzke, A. E.; Skewes-Cox, A. O.
1980-10-01
Regional and subregional differences in the potential development of decentralized solar technologies are studied. Two sectors of the economy were selected for intensive analysis: the residential and industrial sectors. The sequence of analysis follows the same general steps: (1) selection of appropriate prototypes within each land use sector disaggregated by census region; (2) characterization of the end-use energy demand of each prototype in order to match an appropriate decentralized solar technology to the energy demand; (3) assessment of the energy conservation potential within each prototype limited by land use patterns, technology efficiency, and variation in solar insolation; and (4) evaluation of the regional and subregional differences in the land use implications of decentralized energy supply technologies that result from the combination of energy demand, energy supply potential, and the subsequent addition of increasingly more restrictive policies to increase the percent contribution of on-site solar energy.
Physical Properties of Human Whole Salivary Mucin:A Dynamic Light Scattering Study
NASA Astrophysics Data System (ADS)
Mahajan, Manish; Kumar, Vijay; Saraswat, Mayank; Yadav, Savita; Shukla, N. K.; Singh, T. P.
2008-04-01
Human salivary mucin, a primary mucous-membrane-coating glycoprotein, forms the first line of defense against adverse environments, a property attributed to complex formation between mucin subunits and non-mucin species. The aim of the study was to investigate the effect of pH, denaturants (guanidinium hydrochloride, urea) and detergents (CHAPS, Triton X-100, SDS) on human whole salivary mucin. The hydrodynamic size distribution was measured using DLS. It was observed that aggregation was due to an increase in hydrophobic interactions, believed to be brought about by unfolding of the protein core, whereas the detergents, which solubilize the proteins by decreasing hydrophobicity, led to disaggregation of mucin into smaller fragments. Exposing mucin to tobacco extract with subsequent addition of nicotine was found to have a disaggregating effect, suggesting that nicotine may be one of the factors responsible for the disaggregating effect of tobacco on mucin, an important carcinogenetic mechanism.
Grieve, E; Fenwick, E; Yang, H-C; Lean, M
2013-11-01
Burden of disease studies typically classify individuals with a body mass index (BMI) ≥ 30 kg m⁻² as a single group ('obese') and make comparisons to those with lower BMIs. Here, we review the literature on the additional economic burden associated with severe obesity or classes 3 and 4 obesity (BMI ≥ 40 kg m⁻²), the fastest growing category of obesity, with the aim of exploring and disaggregating differences in resource use as BMI increases beyond 40 kg m⁻². We recognize the importance of comparing classes 3 and 4 obesity to less severe obesity (classes 1 and 2) as well as quantifying the single sub-class impacts (classes 3 and 4). Although the latter analysis is the aim of this review, we include results, where found in the literature, for movement between the recognized subclasses and within classes 3 and 4 obesity. Articles presenting data on the economic burden associated with severe obesity were identified from a search of Ovid MEDLINE, EMBASE, EBSCO CINAHL and Cochrane Library databases. Data were extracted on the direct costs, productivity costs and resource use associated with severe obesity along with estimates of the multiplier effects associated with increasing BMI. Fifteen studies were identified, of which four disaggregated resource use for BMI ≥ 40 kg m⁻². The multiplier effects derived for a variety of different types of costs incurred by the severely obese compared with those of normal weight (18.5 kg m⁻² < BMI < 25 kg m⁻²) ranged from 1.5 to 3.9 for direct costs, and from 1.7 to 8.0 for productivity costs. There are few published data on the economic burden of obesity disaggregated by BMI ≥ 40 kg m⁻². By grouping people homogenously above a threshold of BMI 40 kg m⁻², the multiplier effects for those at the highest end of the spectrum are likely to be underestimated. This will, in turn, impact on the estimates of cost-effectiveness for interventions and policies aimed at the severely obese. © 2013 The Authors. obesity reviews © 2013 International Association for the Study of Obesity.
A Bayesian additive model for understanding public transport usage in special events.
Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco
2016-12-02
Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem is compounded when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R² and also has explanatory power for its individual components.
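As a simplified stand-in for the paper's EP-based model, the sketch below fits an additive Gaussian-process regression with exact inference on toy data and recovers the posterior mean of each additive component, disaggregating total trip counts into a routine component and an event-related component. The kernels, lengthscales, and the synthetic "concert" signal are assumptions, not the Singapore data or the published model.

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy data: total trip counts = smooth routine pattern + short event bump + noise
rng = np.random.default_rng(3)
t = np.linspace(0.0, 24.0, 200)                        # hour of day
routine = 50.0 + 30.0 * np.sin(2 * np.pi * t / 24.0)
event = 40.0 * np.exp(-0.5 * ((t - 20.0) / 0.7) ** 2)  # event ending around 20:00
y = routine + event + rng.normal(0.0, 3.0, t.size)

# Additive GP: one long-lengthscale component (routine) + one short one (event)
K_routine = rbf(t, t, lengthscale=6.0, variance=900.0)
K_event = rbf(t, t, lengthscale=0.7, variance=400.0)
noise = 9.0 * np.eye(t.size)
K_total_inv = np.linalg.inv(K_routine + K_event + noise)

# Posterior mean of each zero-mean additive component given the de-meaned totals;
# the overall mean is attributed to the routine component by convention here.
routine_hat = K_routine @ K_total_inv @ (y - y.mean()) + y.mean()
event_hat = K_event @ K_total_inv @ (y - y.mean())
```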
War and deforestation in Sierra Leone
NASA Astrophysics Data System (ADS)
Burgess, Robin; Miguel, Edward; Stanton, Charlotte
2015-09-01
The impact of armed conflict on the environment is of major public policy importance. We use a geographically disaggregated dataset of civil war violence together with satellite imagery of land cover to test whether war facilitated or prevented forest loss in Sierra Leone. The conflict data set allows us to establish where rebel groups were stationed and where battles and attacks occurred. The satellite data enable us to monitor the change in forest cover (total, primary, and secondary) in all of Sierra Leone's 151 chiefdoms between 1990 (prior to the war) and 2000 (just prior to its end). The results suggest that conflict in Sierra Leone acted as a brake on local deforestation: conflict-ridden areas experienced significantly less forest loss relative to their more conflict-free counterparts.
Preston, Daniel L; Jacobs, Abigail Z; Orlofske, Sarah A; Johnson, Pieter T J
2014-03-01
Most food webs use taxonomic or trophic species as building blocks, thereby collapsing variability in feeding linkages that occurs during the growth and development of individuals. This issue is particularly relevant to integrating parasites into food webs because parasites often undergo extreme ontogenetic niche shifts. Here, we used three versions of a freshwater pond food web with varying levels of node resolution (from taxonomic species to life stages) to examine how complex life cycles and parasites alter web properties, the perceived trophic position of organisms, and the fit of a probabilistic niche model. Consistent with prior studies, parasites increased most measures of web complexity in the taxonomic species web; however, when nodes were disaggregated into life stages, the effects of parasites on several network properties (e.g., connectance and nestedness) were reversed, due in part to the lower trophic generality of parasite life stages relative to free-living life stages. Disaggregation also reduced the trophic level of organisms with either complex or direct life cycles and was particularly useful when including predation on parasites, which can inflate trophic positions when life stages are collapsed. Contrary to predictions, disaggregation decreased network intervality and did not enhance the fit of a probabilistic niche model to the food webs with parasites. Although the most useful level of biological organization in food webs will vary with the questions of interest, our results suggest that disaggregating species-level nodes may refine our perception of how parasites and other complex life cycle organisms influence ecological networks.
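A tiny worked example of how node disaggregation changes a web metric: directed connectance C = L/S² falls when a parasite node is split into life stages that each realize only part of the original links. The species names and links below are invented for illustration and are not the pond web analyzed in the paper.

```python
def connectance(links, n_nodes):
    """Directed connectance C = L / S^2 of a food web."""
    return len(links) / n_nodes ** 2

# Toy web with taxonomic species as nodes (links are (consumer, resource) pairs)
species_links = {("snail", "alga"), ("trematode", "snail"),
                 ("trematode", "frog"), ("heron", "frog"), ("heron", "snail")}
print(connectance(species_links, n_nodes=5))            # 5 / 25 = 0.20

# The same web with the trematode split into life stages: each stage keeps
# only the links it actually realizes, so generality per node drops.
stage_links = {("snail", "alga"), ("trematode_miracidium", "snail"),
               ("trematode_cercaria", "frog"), ("heron", "frog"), ("heron", "snail")}
print(connectance(stage_links, n_nodes=6))               # 5 / 36 ≈ 0.14
```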
NASA Technical Reports Server (NTRS)
Nyangweso, Emmanuel; Bole, Brian
2014-01-01
Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
Nichols, Michael R; Moss, Melissa A; Reed, Dana Kim; Cratic-McDaniel, Stephanie; Hoh, Jan H; Rosenberry, Terrone L
2005-01-28
The brains of Alzheimer's disease (AD) patients contain large numbers of amyloid plaques that are rich in fibrils composed of 40- and 42-residue amyloid-beta (Abeta) peptides. Several lines of evidence indicate that fibrillar Abeta and especially soluble Abeta aggregates are important in the etiology of AD. Recent reports also stress that amyloid aggregates are polymorphic and that a single polypeptide can fold into multiple amyloid conformations. Here we demonstrate that Abeta-(1-40) can form soluble aggregates with predominant beta-structures that differ in stability and morphology. One class of aggregates involved soluble Abeta protofibrils, prepared by vigorous overnight agitation of monomeric Abeta-(1-40) at low ionic strength. Dilution of these aggregation reactions induced disaggregation to monomers as measured by size exclusion chromatography. Protofibril concentrations monitored by thioflavin T fluorescence decreased in at least two kinetic phases, with initial disaggregation (rate constant approximately 1 h(-1)) followed by a much slower secondary phase. Incubation of the reactions without agitation resulted in less disaggregation at slower rates, indicating that the protofibrils became progressively more stable over time. In fact, protofibrils isolated by size exclusion chromatography were completely stable and gave no disaggregation. A second class of soluble Abeta aggregates was generated rapidly (<10 min) in buffered 2% hexafluoroisopropanol (HFIP). These aggregates showed increased thioflavin T fluorescence and were rich in beta-structure by circular dichroism. Electron microscopy and atomic force microscopy revealed initial globular clusters that progressed over several days to soluble fibrous aggregates. When diluted out of HFIP, these aggregates initially were very unstable and disaggregated completely within 2 min. However, their stability increased as they progressed to fibers. Relative to Abeta protofibrils, the HFIP-induced aggregates seeded elongation by Abeta monomer deposition very poorly. The techniques used to distinguish these two classes of soluble Abeta aggregates may be useful in characterizing Abeta aggregates formed in vivo.
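The two-phase decay described above is commonly summarized by fitting a double-exponential. A minimal sketch is shown below; the data are synthetic, with a fast rate of the same order as the reported ~1 h⁻¹, and are not the measured fluorescence series.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a_fast, k_fast, a_slow, k_slow):
    """Two-phase decay of the thioflavin T signal after dilution."""
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

# Synthetic data (illustrative amplitudes and rates only)
t = np.linspace(0.0, 24.0, 40)                       # hours after dilution
rng = np.random.default_rng(7)
signal = biexponential(t, 60.0, 1.0, 40.0, 0.05) + rng.normal(0.0, 1.5, t.size)

popt, _ = curve_fit(biexponential, t, signal, p0=(50.0, 0.5, 50.0, 0.02))
print("fast k = %.2f / h, slow k = %.3f / h" % (popt[1], popt[3]))
```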
NASA Astrophysics Data System (ADS)
Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur
2017-12-01
Symmetric cryptographic algorithms are known to have more weaknesses in the encryption process than asymmetric algorithms. A symmetric stream cipher works by XORing the plaintext with a key stream. To improve the security of the symmetric stream cipher, this work adds a Triple Transposition Key, developed from the Transposition Cipher, and applies the Base64 algorithm as the final step of the encryption process. Experiments show that the resulting ciphertext is sufficiently random.
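A minimal sketch of the general construction is given below, assuming a working key derived by applying a columnar transposition to the key three times, XORed with the plaintext and Base64-encoded at the end; the exact Triple Transposition Key procedure and column width used in the paper may differ.

```python
import base64
from itertools import cycle

def columnar_transpose(text, width=4):
    """One columnar transposition pass: write row-wise, read column-wise."""
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    return "".join("".join(row[c] for row in rows if c < len(row))
                   for c in range(width))

def triple_transposition_key(key):
    """Derive the working key by applying the transposition three times."""
    for _ in range(3):
        key = columnar_transpose(key)
    return key

def encrypt(plaintext, key):
    k = triple_transposition_key(key)
    xored = bytes(p ^ ord(c) for p, c in zip(plaintext.encode(), cycle(k)))
    return base64.b64encode(xored).decode()        # Base64 as the final step

def decrypt(ciphertext, key):
    k = triple_transposition_key(key)
    raw = base64.b64decode(ciphertext)
    return bytes(b ^ ord(c) for b, c in zip(raw, cycle(k))).decode()

token = encrypt("confidential message", "S3CR3TKEY")
print(token, decrypt(token, "S3CR3TKEY"))
```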
Shield, Kristy; Riley, Clyde; Quinn, Michael A; Rice, Gregory E; Ackland, Margaret L; Ahmed, Nuzhat
2007-01-01
Background: Ovarian cancer is characterized by widespread intra-abdominal metastasis, which represents a major clinical hurdle in the prognosis and management of the disease. A significant proportion of ovarian cancer cells in peritoneal ascites exist as multicellular aggregates or spheroids. We hypothesize that these cellular aggregates or spheroids are invasive with the capacity to survive and implant on the peritoneal surface. This study was designed to elucidate early inherent mechanism(s) of spheroid survival, growth and disaggregation required for peritoneal metastases. Methods: In this study, we determined the growth pattern and adhesive capacity of ovarian cancer cell lines (HEY and OVHS1) grown as spheroids, using the well-established liquid overlay technique, and compared them to a normal ovarian cell line (IOSE29) and cancer cells grown as a monolayer. The proteolytic capacity of these spheroids was compared with cells grown as a monolayer using a gelatin zymography assay to analyze secreted MMP-2/9 in conditioned serum-free medium. The disaggregation of cancer cell line spheroids was determined on extracellular matrices (ECM) such as laminin (LM), fibronectin (FN) and collagen (CI), and the expression of α2, α3, αv, α6 and β1 integrin was determined by flow cytometric analysis. Neutralizing antibodies against α2, β1 subunits and α2β1 integrin were used to inhibit disaggregation as well as activation of MMPs in spheroids. Results: We demonstrate that ovarian cancer cell lines grown as spheroids can sustain growth for 10 days, while the normal ovarian cell line failed to grow beyond 2 days. Compared to cells grown as a monolayer, cancer cells grown as spheroids demonstrated no change in adhesion for up to 4 days, while IOSE29 cells had a 2–4-fold loss of adhesion within 2 days. Cancer cell spheroids disaggregated on extracellular matrices (ECM) and demonstrated enhanced expression of secreted pro-MMP2 as well as activated MMP2/MMP9, with no such activation of MMPs observed in monolayer cells. Flow cytometric analysis demonstrated enhanced expression of α2 and diminution of α6 integrin subunits in spheroids versus monolayer cells. No change in the expression of α3, αv and β1 subunits was evident. Conversely, except for αv integrin, a 1.5–7.5-fold decrease in α2, α3, α6 and β1 integrin subunit expression was observed in IOSE29 cells within 2 days. Neutralizing antibodies against α2, β1 subunits and α2β1 integrin inhibited disaggregation as well as activation of MMPs in spheroids. Conclusion: Our results suggest that enhanced expression of α2β1 integrin may influence spheroid disaggregation and proteolysis responsible for the peritoneal dissemination of ovarian carcinoma. This may indicate a new therapeutic target for the suppression of the peritoneal metastasis associated with advanced ovarian carcinomas. PMID:17567918
47 CFR 90.365 - Partitioned licenses and disaggregated spectrum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioned licenses and disaggregated spectrum. 90.365 Section 90.365 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Intelligent Transportation Systems Radio Service...
47 CFR 101.1111 - Partitioning and disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Partitioning and disaggregation. 101.1111 Section 101.1111 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Competitive Bidding Procedures for LMDS § 101.1111 Partitioning and...
47 CFR 101.1111 - Partitioning and disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Partitioning and disaggregation. 101.1111 Section 101.1111 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Competitive Bidding Procedures for LMDS § 101.1111 Partitioning and...
47 CFR 101.1111 - Partitioning and disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Partitioning and disaggregation. 101.1111 Section 101.1111 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Competitive Bidding Procedures for LMDS § 101.1111 Partitioning and...
47 CFR 101.1111 - Partitioning and disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Partitioning and disaggregation. 101.1111 Section 101.1111 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Competitive Bidding Procedures for LMDS § 101.1111 Partitioning and...
Primary midgut, salivary gland, and ovary cultures from Boophilus microplus.
Mosqueda, Juan; Cossío-Bayugar, Raquel; Rodríguez, Elba; Falcón, Alfonso; Ramos, Alberto; Figueroa, Julio V; Alvarez, Antonio
2008-12-01
Primary cell cultures from different tick organs are a valuable tool for host-parasite research in the study of the protozoan Babesia sp., which infects different organs of the tick. In this work we describe the generation of midgut, salivary gland, and ovary primary cell cultures from dissections of Boophilus microplus. Midguts, salivary glands, and ovaries were dissected from B. microplus ticks on different days after bovine infestation; different enzymatic disaggregating protocols were tested in the presence of proteolytic enzymes, such as trypsin and collagenase type I and II, for tissue disaggregation and primary cell culture generation. The dissected tick organs obtained 18-20 days after bovine infestation showed greater cellular differentiation and were easier to identify by cellular morphology. The enzymatic disaggregation results showed that each tissue required a different proteolytic enzyme for optimal disaggregation; collagenase type I produced the most complete disaggregation for ovaries but not for midgut or salivary glands. Collagenase type II was effective for salivary glands but performed poorly on ovaries and midguts, and trypsin was effective for midguts only. The midgut and ovary primary cell cultures were maintained in optimal conditions for 4 weeks, after which the cells were no longer viable. The salivary gland cell cultures were viable for 8 months.
Crucial HSP70 co-chaperone complex unlocks metazoan protein disaggregation
Nillegoda, Nadinath B.; Kirstein, Janine; Szlachcic, Anna; Berynskyy, Mykhaylo; Stank, Antonia; Stengel, Florian; Arnsburg, Kristin; Gao, Xuechao; Scior, Annika; Aebersold, Ruedi; Guilbride, D. Lys; Wade, Rebecca C.; Morimoto, Richard I.; Mayer, Matthias P.; Bukau, Bernd
2016-01-01
Protein aggregates are the hallmark of stressed and ageing cells, and characterize several pathophysiological states [1,2]. Healthy metazoan cells effectively eliminate intracellular protein aggregates [3,4], indicating that efficient disaggregation and/or degradation mechanisms exist. However, metazoans lack the key heat-shock protein disaggregase HSP100 of non-metazoan HSP70-dependent protein disaggregation systems [5,6], and the human HSP70 system alone, even with the crucial HSP110 nucleotide exchange factor, has poor disaggregation activity in vitro [4,7]. This unresolved conundrum is central to protein quality control biology. Here we show that synergic cooperation between complexed J-protein co-chaperones of classes A and B unleashes highly efficient protein disaggregation activity in human and nematode HSP70 systems. Metazoan mixed-class J-protein complexes are transient, involve complementary charged regions conserved in the J-domains and carboxy-terminal domains of each J-protein class, and are flexible with respect to subunit composition. Complex formation allows J-proteins to initiate transient higher order chaperone structures involving HSP70 and interacting nucleotide exchange factors. A network of cooperative class A and B J-protein interactions therefore provides the metazoan HSP70 machinery with powerful, flexible, and finely regulatable disaggregase activity and a further level of regulation crucial for cellular protein quality control. PMID:26245380
Visualization for Hyper-Heuristics: Back-End Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Luke
Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyperheuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreyra, M; Salinas Aranda, F; Dodat, D
Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data was configured for Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs and output factors down to a field size of 2×2 cm² were measured on a Varian Trilogy Linac and compared with GBD library using 2% 2mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance according to VMAT accuracy requirements. An end-to-end test was implemented planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. PTW 31013 ionization chamber was used with Keithley 35617EBS electrometer for absolute point dose measurements in water and lung equivalent inserts. TPS calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2mm DTA for 95% of points. Results: GBD set vs. measured data passed 2% 2mm 1D gamma analysis even for small fields. Machine performance tests show results are independent of machine delivery configuration, as expected. Absolute point dosimetry comparison resulted within 4% for the worst case scenario in lung. Over 97% of the points evaluated in dose distributions passed gamma index analysis. Conclusion: Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.
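For reference, a 1D global gamma evaluation of the kind cited (2% dose difference, 2 mm distance-to-agreement) can be sketched as follows. The Gaussian profiles are synthetic stand-ins, not clinical data, and the normalization convention (global, to the reference maximum) is an assumption.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dta_mm=2.0):
    """1D gamma index (global normalization) for a reference vs. evaluated profile.
    dose_tol is a fraction of the reference maximum; dta_mm is in millimetres."""
    dd_norm = dose_tol * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / dd_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Illustrative profiles (a slightly shifted, slightly broadened Gaussian "beam")
x = np.linspace(-30.0, 30.0, 241)                    # mm
calculated = 100.0 * np.exp(-0.5 * (x / 10.0) ** 2)
measured = 100.0 * np.exp(-0.5 * ((x - 0.5) / 10.2) ** 2)
g = gamma_1d(x, measured, x, calculated)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```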
Putting the pyramid into action: the Healthy Eating Index and Food Quality Score.
Kennedy, Eileen
2008-01-01
Consumption patterns are changing globally. As a result, both researchers and policy makers require simple, easy-to-use measures of diet quality. The Healthy Eating Index (HEI) was developed as a single, summary measure of diet quality. The original HEI was a ten-component index based on the US Dietary Guidelines and the Food Guide Pyramid. Research on the HEI indicates that the index correlates significantly with the RDAs for a range of nutrients and with an individual's self-rating of their diet. The revised HEI provides a more disaggregated version of the original index based on the 2005 Dietary Guidelines for Americans. Within each of the five major food groups, some foods are more nutrient-dense than others. Nutrient Density algorithms have been developed to rate foods within food groups. The selection of the most nutrient-dense foods within food groups leads to a dietary pattern with a higher HEI. The implications of using the HEI and nutrient density to develop interventions are discussed in this presentation.
47 CFR 101.535 - Geographic partitioning and spectrum aggregation/disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES 24 GHz Service and Digital Electronic Message Service § 101.535 Geographic partitioning and spectrum aggregation/disaggregation. (a) Eligibility... grant of a license. (2) Any existing frequency coordination agreements shall convey with the assignment...
47 CFR 101.535 - Geographic partitioning and spectrum aggregation/disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES 24 GHz Service and Digital Electronic Message Service § 101.535 Geographic partitioning and spectrum aggregation/disaggregation. (a) Eligibility... grant of a license. (2) Any existing frequency coordination agreements shall convey with the assignment...
47 CFR 101.535 - Geographic partitioning and spectrum aggregation/disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES 24 GHz Service and Digital Electronic Message Service § 101.535 Geographic partitioning and spectrum aggregation/disaggregation. (a) Eligibility... grant of a license. (2) Any existing frequency coordination agreements shall convey with the assignment...
47 CFR 101.535 - Geographic partitioning and spectrum aggregation/disaggregation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES 24 GHz Service and Digital Electronic Message Service § 101.535 Geographic partitioning and spectrum aggregation/disaggregation. (a) Eligibility... grant of a license. (2) Any existing frequency coordination agreements shall convey with the assignment...
47 CFR 101.535 - Geographic partitioning and spectrum aggregation/disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES 24 GHz Service and Digital Electronic Message Service § 101.535 Geographic partitioning and spectrum aggregation/disaggregation. (a) Eligibility... grant of a license. (2) Any existing frequency coordination agreements shall convey with the assignment...
Combination of Rivest-Shamir-Adleman Algorithm and End of File Method for Data Security
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Amalia, Amalia; Elviwani
2018-03-01
Data security is one of the crucial issues in the delivery of information. One of the ways used to secure data is to encode it into something that is not comprehensible by human beings, using cryptographic techniques. The Rivest-Shamir-Adleman (RSA) cryptographic algorithm has been proven robust for securing messages. Since this algorithm uses two different keys (i.e., a public key and a private key) at the time of encryption and decryption, it is classified as an asymmetric cryptography algorithm. Steganography is a method that is used to secure a message by inserting the bits of the message into a larger medium such as an image. One of the known steganography methods is End of File (EoF). In this research, the ciphertext resulting from the RSA algorithm is compiled into an array form and appended to the end of the image. The result of the EoF method is an image with a line of black gradations beneath it; this line contains the secret message. This combination of cryptography and steganography is expected to increase the security of the message, since the message encryption technique (RSA) is mixed with the data hiding technique (EoF).
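A minimal sketch of the combined scheme is given below, assuming textbook RSA with toy primes and a simple marker-plus-ciphertext layout appended after the image bytes; real deployments need large keys and padding, and the paper's exact file layout and encoding of the appended data may differ.

```python
# Textbook RSA with toy parameters plus an End-of-File append (Python 3.8+).
from math import gcd

P, Q = 61, 53                      # toy primes; real use needs large primes
N = P * Q
PHI = (P - 1) * (Q - 1)
E = 17
assert gcd(E, PHI) == 1
D = pow(E, -1, PHI)                # private exponent

def rsa_encrypt(message):
    return [pow(ord(ch), E, N) for ch in message]

def rsa_decrypt(blocks):
    return "".join(chr(pow(b, D, N)) for b in blocks)

MARKER = b"::EOF-STEGO::"          # illustrative separator, not from the paper

def embed(image_path, out_path, message):
    """Append the RSA ciphertext after the original image data (EoF method)."""
    payload = ",".join(str(b) for b in rsa_encrypt(message)).encode()
    with open(image_path, "rb") as src, open(out_path, "wb") as dst:
        dst.write(src.read() + MARKER + payload)

def extract(stego_path):
    data = open(stego_path, "rb").read()
    payload = data.split(MARKER, 1)[1]
    return rsa_decrypt([int(b) for b in payload.split(b",")])

# Usage: embed("cover.png", "stego.png", "meet at noon"); extract("stego.png")
```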
NASA Astrophysics Data System (ADS)
Sorensen, Ira Joseph
A primary objective of the effort reported here is to develop a radiometric instrument modeling environment to provide complete end-to-end numerical models of radiometric instruments, integrating the optical, electro-thermal, and electronic systems. The modeling environment consists of a Monte Carlo ray-trace (MCRT) model of the optical system coupled to a transient, three-dimensional finite-difference electrothermal model of the detector assembly with an analytic model of the signal-conditioning circuitry. The environment provides a complete simulation of the dynamic optical and electrothermal behavior of the instrument. The modeling environment is used to create an end-to-end model of the CERES scanning radiometer, and its performance is compared to the performance of an operational CERES total channel as a benchmark. A further objective of this effort is to formulate an efficient design environment for radiometric instruments. To this end, the modeling environment is then combined with evolutionary search algorithms known as genetic algorithms (GA's) to develop a methodology for optimal instrument design using high-level radiometric instrument models. GA's are applied to the design of the optical system and detector system separately and to both as an aggregate function with positive results.
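A hedged sketch of the optimization loop described above: a generic genetic algorithm (truncation selection, blend crossover, Gaussian mutation) minimizing a placeholder figure of merit that stands in for the end-to-end instrument model. The parameter names, bounds, and merit function are assumptions for illustration, not the CERES model or the GA variant used in the work.

```python
import numpy as np

def instrument_merit(params):
    """Placeholder figure of merit standing in for the end-to-end model output
    (e.g., radiometric error); a real run would call the MCRT/electrothermal code."""
    aperture, cavity_depth, time_constant = params
    return (aperture - 1.2) ** 2 + (cavity_depth - 0.8) ** 2 + (time_constant - 0.3) ** 2

def genetic_search(fitness, bounds, pop_size=40, generations=60, mut_sigma=0.05):
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.uniform(size=lo.size)                     # blend crossover
            child = alpha * a + (1 - alpha) * b
            child += rng.normal(0.0, mut_sigma, size=lo.size)     # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    return pop[np.argmin([fitness(ind) for ind in pop])]

best = genetic_search(instrument_merit, bounds=[(0.5, 2.0), (0.1, 1.5), (0.05, 1.0)])
print(best)   # should approach (1.2, 0.8, 0.3) for this toy merit function
```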
An efficient group multicast routing for multimedia communication
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugen; Yan, Xinfang
2004-04-01
Group multicasting is a kind of communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying quality of service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Source-specific routing trees for each member are generated in our algorithm, which satisfy each member's bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with Low and Song's. The experimental results show that our proposed algorithm performs better in terms of network cost and the ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which can avoid link blocking and enhance network behavior.
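A simplified sketch of the per-member idea behind delay-constrained routing is shown below, assuming a least-cost path is used unless it violates the end-to-end delay bound, in which case the least-delay path is substituted. This is an illustrative heuristic only, not the paper's or Low and Song's exact algorithm; the toy topology and bound are assumptions.

```python
import heapq

def dijkstra(graph, src, weight):
    """Shortest distances and predecessors using edge attribute `weight`."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, attrs in graph[u].items():
            nd = d + attrs[weight]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def path(prev, src, dst):
    node, out = dst, [dst]
    while node != src:
        node = prev[node]
        out.append(node)
    return out[::-1]

def delay_bounded_tree(graph, source, members, delay_bound):
    """Per-member heuristic: prefer the least-cost path; if it violates the
    end-to-end delay bound, fall back to the least-delay path."""
    _, cost_prev = dijkstra(graph, source, "cost")
    _, delay_prev = dijkstra(graph, source, "delay")
    tree = {}
    for m in members:
        p = path(cost_prev, source, m)
        p_delay = sum(graph[a][b]["delay"] for a, b in zip(p, p[1:]))
        if p_delay > delay_bound:
            p = path(delay_prev, source, m)          # least-delay fallback
        tree[m] = p
    return tree

# Toy topology: {node: {neighbor: {"cost": c, "delay": d}}}
g = {
    "s": {"a": {"cost": 1, "delay": 5}, "b": {"cost": 4, "delay": 1}},
    "a": {"s": {"cost": 1, "delay": 5}, "t": {"cost": 1, "delay": 5}},
    "b": {"s": {"cost": 4, "delay": 1}, "t": {"cost": 4, "delay": 1}},
    "t": {"a": {"cost": 1, "delay": 5}, "b": {"cost": 4, "delay": 1}},
}
print(delay_bounded_tree(g, "s", ["t"], delay_bound=6))   # picks the low-delay path
```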
Disaggregation of small, cohesive rubble pile asteroids due to YORP
NASA Astrophysics Data System (ADS)
Scheeres, D. J.
2018-04-01
The implication of small amounts of cohesion within relatively small rubble pile asteroids is investigated with regard to their evolution under the persistent presence of the YORP effect. We find that below a characteristic size, which is a function of cohesive strength, density and other properties, rubble pile asteroids can enter a "disaggregation phase" in which they are subject to repeated fissions after which the formation of a stabilizing binary system is not possible. Once this threshold is passed rubble pile asteroids may be disaggregated into their constituent components within a finite time span. These constituent components will have their own spin limits - albeit potentially at a much higher spin rate due to the greater strength of a monolithic body. The implications of this prediction are discussed and include modification of size distributions, prevalence of monolithic bodies among meteoroids and the lifetime of small rubble pile bodies in the solar system. The theory is then used to place constraints on the strength of binary asteroids characterized as a function of their type.
Dynamic structural states of ClpB involved in its disaggregation function.
Uchihashi, Takayuki; Watanabe, Yo-Hei; Nakazaki, Yosuke; Yamasaki, Takashi; Watanabe, Hiroki; Maruno, Takahiro; Ishii, Kentaro; Uchiyama, Susumu; Song, Chihong; Murata, Kazuyoshi; Iino, Ryota; Ando, Toshio
2018-06-01
The ATP-dependent bacterial protein disaggregation machine, ClpB belonging to the AAA+ superfamily, refolds toxic protein aggregates into the native state in cooperation with the cognate Hsp70 partner. The ring-shaped hexamers of ClpB unfold and thread its protein substrate through the central pore. However, their function-related structural dynamics has remained elusive. Here we directly visualize ClpB using high-speed atomic force microscopy (HS-AFM) to gain a mechanistic insight into its disaggregation function. The HS-AFM movies demonstrate massive conformational changes of the hexameric ring during ATP hydrolysis, from a round ring to a spiral and even to a pair of twisted half-spirals. HS-AFM observations of Walker-motif mutants unveil crucial roles of ATP binding and hydrolysis in the oligomer formation and structural dynamics. Furthermore, repressed and hyperactive mutations result in significantly different oligomeric forms. These results provide a comprehensive view for the ATP-driven oligomeric-state transitions that enable ClpB to disentangle protein aggregates.
Metazoan Hsp70 machines use Hsp110 to power protein disaggregation.
Rampelt, Heike; Kirstein-Miles, Janine; Nillegoda, Nadinath B; Chi, Kang; Scholz, Sebastian R; Morimoto, Richard I; Bukau, Bernd
2012-11-05
Accumulation of aggregation-prone misfolded proteins disrupts normal cellular function and promotes ageing and disease. Bacteria, fungi and plants counteract this by solubilizing and refolding aggregated proteins via a powerful cytosolic ATP-dependent bichaperone system, comprising the AAA+ disaggregase Hsp100 and the Hsp70-Hsp40 system. Metazoa, however, lack Hsp100 disaggregases. We show that instead the Hsp110 member of the Hsp70 superfamily remodels the human Hsp70-Hsp40 system to efficiently disaggregate and refold aggregates of heat and chemically denatured proteins in vitro and in cell extracts. This Hsp110 effect relies on nucleotide exchange, not on ATPase activity, implying ATP-driven chaperoning is not required. Knock-down of nematode Caenorhabditis elegans Hsp110, but not an unrelated nucleotide exchange factor, compromises dissolution of heat-induced protein aggregates and severely shortens lifespan after heat shock. We conclude that in metazoa, Hsp70-Hsp40 powered by Hsp110 nucleotide exchange represents the crucial disaggregation machinery that reestablishes protein homeostasis to counteract protein unfolding stress.
Economic growth, energy consumption and CO2 emissions in India: a disaggregated causal analysis
NASA Astrophysics Data System (ADS)
Nain, Md Zulquar; Ahmad, Wasim; Kamaiah, Bandi
2017-09-01
This study examines the long-run and short-run causal relationships among energy consumption, real gross domestic product (GDP) and CO2 emissions using aggregate and disaggregate (sectoral) energy consumption measures utilising annual data from 1971 to 2011. The autoregressive distributed lag bounds test reveals that there is a long-run relationship among the variables concerned at both aggregate and disaggregate levels. The Toda-Yamamoto causality tests, however, reveal that the long-run as well as short-run causal relationships among the variables are not uniform across sectors. The weight of evidence in the study indicates that there is short-run causality from electricity consumption to economic growth, and to CO2 emissions. The results suggest that India should take appropriately cautious steps to sustain its high growth rate and at the same time control emissions of CO2. Further, energy and environmental policies should acknowledge the sectoral differences in the relationship between energy consumption and real gross domestic product.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... Spectrum Disaggregation Rules and Policies for Certain Wireless Radio Services AGENCY: Federal..., geographic partitioning, and spectrum disaggregation for certain Wireless Radio Services in an effort to... Counsel, Mobility Division, Wireless Telecommunications Bureau, at (202) 418- 0920, or e-mail at Richard...
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... SERVICES FIXED MICROWAVE SERVICES Multichannel Video Distribution and Data Service Rules for the 12.2-12.7 GHz Band § 101.1415 Partitioning and disaggregation. (a) MVDDS licensees are permitted to partition...
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... SERVICES FIXED MICROWAVE SERVICES Multichannel Video Distribution and Data Service Rules for the 12.2-12.7 GHz Band § 101.1415 Partitioning and disaggregation. (a) MVDDS licensees are permitted to partition...
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... SERVICES FIXED MICROWAVE SERVICES Multichannel Video Distribution and Data Service Rules for the 12.2-12.7 GHz Band § 101.1415 Partitioning and disaggregation. (a) MVDDS licensees are permitted to partition...
47 CFR 101.1415 - Partitioning and disaggregation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... American Datum (NAD83). (d) Unjust enrichment. 12 GHz licensees that received a bidding credit and... SERVICES FIXED MICROWAVE SERVICES Multichannel Video Distribution and Data Service Rules for the 12.2-12.7 GHz Band § 101.1415 Partitioning and disaggregation. (a) MVDDS licensees are permitted to partition...
34 CFR 200.7 - Disaggregation of data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 1 2011-07-01 2011-07-01 false Disaggregation of data. 200.7 Section 200.7 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic...
34 CFR 200.7 - Disaggregation of data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 1 2012-07-01 2012-07-01 false Disaggregation of data. 200.7 Section 200.7 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION TITLE I-IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED Improving Basic...
Optimization models for degrouping population data.
Bermúdez, Silvia; Blanquero, Rafael
2016-07-01
In certain countries population data are available in grouped form only, usually as quinquennial age groups plus a large open-ended range for the elderly. However, official statistics call for data by individual age since many statistical operations, such as the calculation of demographic indicators, require the use of ungrouped population data. In this paper a number of mathematical models are proposed which, starting from population data given in age groups, enable these ranges to be degrouped into age-specific population values without leaving a fractional part. Unlike other existing procedures for disaggregating demographic data, ours makes it possible to process several years' data simultaneously in a coherent way, and provides accurate results longitudinally as well as transversally. This procedure is also shown to be helpful in dealing with degrouped population data affected by noise, such as those affected by the age-heaping phenomenon.
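The following is a minimal sketch of the integer-valued disaggregation idea, not the paper's optimization models: a five-year age-group count is split into single-age counts guided by an assumed reference age profile, with a largest-remainder rule guaranteeing that the parts are whole numbers summing exactly to the group total. The function name and reference weights are illustrative assumptions.

    # Minimal sketch (not the paper's optimization model): split a 5-year age-group
    # count into integer single-age counts, guided by a reference age profile, using
    # a largest-remainder rule so the parts sum exactly to the group total.
    def degroup(group_total, reference_weights):
        """group_total: integer count for the 5-year group.
        reference_weights: 5 nonnegative weights (e.g. from a standard population)."""
        s = float(sum(reference_weights))
        shares = [group_total * w / s for w in reference_weights]
        counts = [int(x) for x in shares]           # integer parts
        remainder = group_total - sum(counts)       # units still to allocate
        # give the leftover units to the ages with the largest fractional parts
        order = sorted(range(len(shares)), key=lambda i: shares[i] - counts[i], reverse=True)
        for i in order[:remainder]:
            counts[i] += 1
        return counts

    print(degroup(1234, [1.0, 1.0, 0.9, 0.9, 0.8]))  # five single-age counts summing to 1234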
Photoinduced Disaggregation of TiO2 Nanoparticles Enables Transdermal Penetration
Bennett, Samuel W.; Zhou, Dongxu; Mielke, Randall; Keller, Arturo A.
2012-01-01
Under many aqueous conditions, metal oxide nanoparticles attract other nanoparticles and grow into fractal aggregates as the result of a balance between electrostatic and van der Waals interactions. Although particle coagulation has been studied for over a century, the effect of light on the state of aggregation is not well understood. Since nanoparticle mobility and toxicity have been shown to be a function of aggregate size, and generally increase as size decreases, photo-induced disaggregation may have significant effects. We show that ambient light and other light sources can partially disaggregate nanoparticles from the aggregates and increase the dermal transport of nanoparticles, such that small nanoparticle clusters can readily diffuse into and through the dermal profile, likely via the interstitial spaces. The discovery of photoinduced disaggregation presents a new phenomenon that has not been previously reported or considered in coagulation theory or transdermal toxicological paradigms. Our results show that after just a few minutes of light exposure, the hydrodynamic diameter of TiO2 aggregates is reduced from ∼280 nm to ∼230 nm. We exposed pigskin to the nanoparticle suspension and found 200 mg kg−1 of TiO2 in skin that was exposed to nanoparticles in the presence of natural sunlight and only 75 mg kg−1 in skin exposed to dark conditions, indicating the influence of light on NP penetration. These results suggest that photoinduced disaggregation may have important health implications. PMID:23155401
BrainIACS: a system for web-based medical image processing
NASA Astrophysics Data System (ADS)
Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.
2009-02-01
We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is implemented entirely with freely available, open-source software. The system, which is based on a client-server architecture, uses an AJAX front-end written with the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described in XML and parsed by the system to create the corresponding user interface elements. Most commonly used elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript after compilation of the front-end code), it is platform independent, the only requirements being that a Servlet implementation is available and that the processing algorithms can execute on the server platform.
Degree-constrained multicast routing for multimedia communications
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugeng; Li, Guidan
2005-02-01
Multicast services are increasingly used by many multimedia applications. As one of the key techniques supporting multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA has a better ability than BSPTA to find unsaturated links and/or unsaturated nodes when generating multicast trees. The performance of BMSTA is affected by the variation of the degree constraints.
Western municipal water conservation policy: The case of disaggregated demand
NASA Astrophysics Data System (ADS)
Burness, Stuart; Chermak, Janie; Krause, Kate
2005-03-01
We investigate aspects of the felicity of both incentive-based and command and control policies in effecting municipal water conservation goals. When demand can be disaggregated according to uses or users, our results suggest that policy efforts be focused on the submarket wherein demand is more elastic. Under plausible consumer parameters, a household production function approach to water utilization prescribes the nature of demand elasticities in alternative uses and squares nicely with empirical results from the literature. An empirical example illustrates. Overall, given data and other informational limitations, extant institutional structures, and in situ technology, our analysis suggests a predisposition for command and control policies over incentive-based tools.
A Methodology for the Optimization of Disaggregated Space System Conceptual Designs
2015-06-18
... orbit disaggregated space systems. Savings of $82 million are identified for an optimized fire detection system. Savings of $5.7 billion are... [remainder of excerpt is table-of-contents and figure-list residue; recoverable fragments include "solutions and update architecture", "Fire detection problem", and "Figure 30 - Example cost vs. weighted mean science return output"]
System of end-to-end symmetric database encryption
NASA Astrophysics Data System (ADS)
Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.
2018-05-01
The article addresses the pressing problem of protecting databases from information leakage that occurs when access control mechanisms are bypassed. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of interaction between the information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system has been developed and described; it is based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by transforming the user's password. In this case, the key is computed immediately before the cryptographic transformations and is not kept in memory after these transformations are completed. Algorithms for registering and authorizing a user, as well as changing a password, are described, and methods for computing the parts of the key during these operations are provided.
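A minimal sketch of the distributed-key idea described above, not the paper's exact scheme: the symmetric key is never stored whole; one share lives in the database, the other is derived from the user's password on demand, and the two are combined just before use. Function names, the XOR combination, and parameter values are illustrative assumptions.

    import hashlib, os, secrets

    # Minimal sketch of the described key-management idea (illustration only, not
    # the paper's exact scheme): one key share is stored in the database, the other
    # is derived from the user's password, and the two are combined (here by XOR)
    # immediately before encryption or decryption.
    def derive_password_share(password, salt, length=32):
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=length)

    def assemble_key(db_share, password, salt):
        pw_share = derive_password_share(password, salt)
        return bytes(a ^ b for a, b in zip(db_share, pw_share))   # recombined session key

    salt = os.urandom(16)
    db_share = secrets.token_bytes(32)      # stored server-side in the database
    key = assemble_key(db_share, "correct horse battery staple", salt)
    print(key.hex())   # feed this key to a symmetric cipher (e.g. AES-GCM) and discard it after use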
Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt
NASA Technical Reports Server (NTRS)
Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.
2015-01-01
Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses including municipal demands, hydroelectric power generation, and planning of agricultural operations. However, uncertainties in the streamflow forecasts pose significant challenges to their utilization in real-time operations. In this study, we systematically decompose various sources of errors in developing seasonal streamflow forecasts from two Land Surface Models (LSMs), Noah3.2 and CLM2, which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of the errors arising from the LSMs, the climate forecasts, and the downscaling/disaggregation techniques in developing seasonal streamflow forecasts. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8deg to 1/8deg spatial resolution using principal component regression (PCR) and then temporally disaggregated from monthly to daily time steps using a kernel-nearest neighbor (K-NN) approach. For the other climatic forcings, excluding precipitation, we considered the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. The selected LSMs were then forced with the precipitation forecasts and NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of the LSMs in forecasting streamflow under different schemes was analyzed to quantify the relative contribution of the various sources of errors in developing seasonal streamflow forecasts. Our results indicate that the most dominant source of errors during the winter and fall seasons is the ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the most to errors during the summer season.
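The sketch below illustrates the k-nearest-neighbour temporal disaggregation idea mentioned above in a minimal form, not the study's exact K-NN implementation: a forecast monthly total is spread to daily values by borrowing the within-month daily pattern of the k historical months whose observed totals are closest. The 30-day month assumption and synthetic history are illustrative.

    import numpy as np

    # Minimal k-NN temporal disaggregation sketch (not the study's exact scheme):
    # borrow the daily fractions of the k historical months with the closest totals
    # and rescale them to the forecast monthly total.
    def knn_disaggregate(month_total, historical_daily, k=3):
        """historical_daily: list of 1-D arrays of daily amounts, one per historical
        month; 30-day months are assumed here for simplicity."""
        totals = np.array([d.sum() for d in historical_daily])
        nearest = np.argsort(np.abs(totals - month_total))[:k]
        fractions = np.mean([historical_daily[i] / totals[i] for i in nearest], axis=0)
        fractions /= fractions.sum()                 # renormalise after averaging
        return month_total * fractions               # daily series summing to month_total

    rng = np.random.default_rng(0)
    history = [rng.gamma(0.4, 5.0, size=30) for _ in range(20)]
    print(knn_disaggregate(120.0, history).sum())    # ~120.0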
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...
2017-04-24
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and blind tested, resulting in greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank-reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.
Domínguez-García, P; Pastor, J M; Rubio, M A
2011-04-01
This article presents results on the aggregation and disaggregation kinetics of 1 μm diameter charged superparamagnetic particles dispersed in water under a constant uniaxial magnetic field, in experiments with salt (KCl) added to the suspension in order to observe the behaviour of the system when the electrical properties of the particles have been screened. The particles carry an electric charge, are confined between two separated 100 μm thick quartz windows, and sediment near the charged bottom wall. The electrostatic interactions that take place in this experimental setup may affect the micro-structure and colloidal stability of the suspension and thus the dynamics of aggregation and disaggregation.
Revisiting the Rise of Electronic Nicotine Delivery Systems Using Search Query Surveillance
Ayers, John W.; Althouse, Benjamin M.; Allem, Jon-Patrick; Leas, Eric C.; Dredze, Mark; Williams, Rebecca
2016-01-01
Introduction Public perceptions of electronic nicotine delivery systems (ENDS) remain poorly understood because surveys are too costly to regularly implement and when implemented there are large delays between data collection and dissemination. Search query surveillance has bridged some of these gaps. Herein, ENDS’ popularity in the U.S. is reassessed using Google searches. Methods ENDS searches originating in the U.S. from January 2009 through January 2015 were disaggregated by terms focused on e-cigarette (e.g., e-cig) versus vaping (e.g., vapers), their geolocation (e.g., state), the aggregate tobacco control measures corresponding to their geolocation (e.g., clean indoor air laws), and by terms that indicated the searcher’s potential interest (e.g., buy e-cigs likely indicates shopping); all analyzed in 2015. Results ENDS searches are increasing across the entire U.S., with 8,498,180 searches during 2014. At the same time, searches shifted from e-cigarette- to vaping-focused terms, especially in coastal states and states with more anti-smoking norms. For example, nationally, e-cigarette searches declined 9% (95% CI=1%, 16%) during 2014 compared with 2013, whereas vaping searches increased 136% (95% CI=97%, 186%), surpassing e-cigarette searches. More ENDS searches were related to shopping (e.g., vape shop) than health concerns (e.g., vaping risks) or cessation (e.g., quit smoking with e-cigs), with shopping searches nearly doubling during 2014. Conclusions ENDS popularity is rapidly growing and evolving, and monitoring searches has provided these timely insights. These findings may inform survey questionnaire development for follow-up investigation and immediately guide policy debates about how the public perceives ENDS’ health risks or cessation benefits. PMID:26876772
Disaggregation of silver nanoparticle homoaggregates in a river water matrix.
Metreveli, George; Philippe, Allan; Schaumann, Gabriele E
2015-12-01
Silver nanoparticles (Ag NPs) could be found in aquatic systems in the near future. Although the interplay between aggregate formation and disaggregation is an important factor for mobility, bioavailability and toxicity of Ag NPs in surface waters, the factors controlling disaggregation of Ag NP homoaggregates are still unknown. In this study, we investigated the reversibility of homoaggregation of citrate coated Ag NPs in a Rhine River water matrix. We characterized the disaggregation of Ag NP homoaggregates by ionic strength reduction and addition of Suwannee River humic acid (SRHA) in the presence of strong and weak shear forces. In order to understand the disaggregation processes, we also studied the nature of homoaggregates and their formation dynamics under the influence of SRHA, Ca(2+) concentration and nanoparticle concentration. Even in the presence of SRHA and at low particle concentrations (10 μg L(-1)), aggregates formed rapidly in filtered Rhine water. The critical coagulation concentration (CCC) of Ca(2+) in reconstituted Rhine water was 1.5 mmol L(-1) and was shifted towards higher values in the presence of SRHA. Analysis of the attachment efficiency as a function of Ca(2+) concentration showed that SRHA induces electrosteric stabilization at low Ca(2+) concentrations and cation-bridging flocculation at high Ca(2+) concentrations. Shear forces in the form of mechanical shaking or ultrasound were necessary for breaking the aggregates. Without ultrasound, SRHA also induced disaggregation, but it required several days to reach a stable size of dense aggregates still larger than the primary particles. Citrate stabilized Ag NPs may be in the form of reaction limited aggregates in aquatic systems similar to the Rhine River. The size and the structure of these aggregates will be dynamic and be determined by the solution conditions. Seasonal variations in the chemical composition of natural waters can result in a sedimentation-release cycle of engineered nanoparticles. Copyright © 2014 Elsevier B.V. All rights reserved.
A swash-backwash model of the single epidemic wave
NASA Astrophysics Data System (ADS)
Cliff, Andrew D.; Haggett, Peter
2006-09-01
While there is a large literature on the form of epidemic waves in the time domain, models of their structure and shape in the spatial domain remain poorly developed. This paper concentrates on the changing spatial distribution of an epidemic wave over time and presents a simple method for identifying the leading and trailing edges of the spatial advance and retreat of such waves. Analysis of edge characteristics is used to (a) disaggregate waves into ‘swash’ and ‘backwash’ stages, (b) measure the phase transitions of areas from susceptible, S, through infective, I, to recovered, R, status (S → I → R) as dimensionless integrals and (c) estimate a spatial version of the basic reproduction number, R0. The methods used are illustrated by application to measles waves in Iceland over a 60-year period from 1915 to 1974. Extensions of the methods for use with more complex waves are possible through modifying the threshold values used to define the start and end points of an event.
Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services
NASA Astrophysics Data System (ADS)
Palmonari, Matteo; Viscusi, Gianluigi
In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition and performance measurement typical of the New Public Management (NPM) to new models of governance aiming for the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making that exploits the extensive digitalization of administrative operations. In this scenario, major challenges relate to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between data-level representations and the human-level knowledge involved in accessing information and in searching for services. Moreover, semantic repository technologies can reach a new level of automation for different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level to produce a comprehensive view of the information managed in public administrations' (PA) information systems, and at the front-end level to support effective service delivery.
Tormene, Paolo; Giorgino, Toni; Quaglini, Silvana; Stefanelli, Mario
2009-01-01
The purpose of this study was to assess the performance of a real-time ("open-end") version of the dynamic time warping (DTW) algorithm for the recognition of motor exercises. Given a possibly incomplete input stream of data and a reference time series, the open-end DTW algorithm computes both the size of the prefix of the reference which is best matched by the input, and the dissimilarity between the matched portions. The algorithm was used to provide real-time feedback to neurological patients undergoing motor rehabilitation. We acquired a dataset of multivariate time series from a sensorized long-sleeve shirt which contains 29 strain sensors distributed on the upper limb. Seven typical rehabilitation exercises were recorded in several variations, both correctly and incorrectly executed, and at various speeds, totaling a data set of 840 time series. Nearest-neighbour classifiers were built according to the outputs of open-end DTW alignments and their global counterparts on exercise pairs. The classifiers were also tested on well-known public datasets from heterogeneous domains. Nonparametric tests show that (1) on full time series the two algorithms achieve the same classification accuracy (p-value = 0.32); (2) on partial time series, classifiers based on open-end DTW have a far higher accuracy (kappa = 0.898 versus kappa = 0.447; p < 10^-5); and (3) the prediction of the matched fraction follows closely the ground truth (root mean square < 10%). The results hold for the motor rehabilitation and the other datasets tested as well. The open-end variant of the DTW algorithm is suitable for the classification of truncated quantitative time series, even in the presence of noise. Early recognition and accurate class prediction can be achieved, provided that enough variance is available over the time span of the reference. Therefore, the proposed technique expands the use of DTW to a wider range of applications, such as real-time biofeedback systems.
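A minimal open-end DTW sketch follows, assuming univariate sequences, an absolute-difference local cost, and a simple length normalisation; it illustrates the prefix-matching idea only and is not the authors' implementation.

    import numpy as np

    # Minimal open-end DTW sketch: the whole input x is aligned against every prefix
    # of the reference y, and the best-matching prefix length and its normalised
    # dissimilarity are returned.
    def open_end_dtw(x, y):
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # open end: the input must be fully consumed, the reference may stop early
        norm = D[n, 1:] / (n + np.arange(1, m + 1))   # normalise by a warping-path length bound
        j_best = int(np.argmin(norm)) + 1
        return j_best, float(norm[j_best - 1])        # matched prefix length, dissimilarity

    prefix_len, dist = open_end_dtw([0, 1, 2, 3], [0, 1, 2, 3, 4, 5, 6])
    print(prefix_len, round(dist, 3))                 # expect a prefix of about 4 samples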
Algorithmic formulation of control problems in manipulation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1975-01-01
The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described, together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in real time. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front, or operator input, end of the control algorithms.
Early Obstacle Detection and Avoidance for All to All Traffic Pattern in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Huc, Florian; Jarry, Aubin; Leone, Pierre; Moraru, Luminita; Nikoletseas, Sotiris; Rolim, Jose
This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast-converging routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.
Das, Sromona; Bhattacharyya, Debasish
2017-12-01
Deposition of insulin aggregates in the human body leads to dysfunction of several organs. The effectiveness of fruit bromelain from pineapple in preventing insulin aggregation was investigated. Proteolysis of bromelain was carried out as in the human digestive system, and the pool of small peptides was separated from larger peptides and proteins. Under conditions that allow growth of insulin aggregates from monomers, this pool of peptides restricted the reaction up to the formation of oligomers of limited size. These peptides also destabilized preformed insulin aggregates to oligomers. These processes were followed fluorimetrically using Thioflavin T and 1-ANS, size-exclusion HPLC, dynamic light scattering, atomic force microscopy, and transmission electron microscopy. Sequences of insulin (A and B chains) and bromelain were aligned using Clustal W software to predict the most probable sites of interaction. Synthetic tripeptides corresponding to the hydrophobic interactive sites of bromelain showed disaggregation of insulin, suggesting specificity of the interactions. The peptides GG and AAA, serving as negative controls, showed no potency in destabilizing the aggregates. Disaggregation potency of the peptides was also observed when insulin was deposited on HepG2 liver cells, where no formation of toxic oligomers occurred. Amyloidogenic des-octapeptide (B23-B30 of insulin), incapable of cell signaling, showed cytotoxicity similar to insulin. This toxicity could be neutralized by bromelain-derived peptides. FT-IR and far-UV circular dichroism analysis indicated that disaggregated insulin had a structure distinctly different from that of its hexameric (native) or monomeric states. Based on the stoichiometry of interaction and the irreversibility of disaggregation, the mechanism(s) of the peptide-insulin interactions have been proposed. J. Cell. Biochem. 118: 4881-4896, 2017. © 2017 Wiley Periodicals, Inc.
45 CFR 286.255 - What quarterly reports must the Tribe submit to us?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 2 2012-10-01 2012-10-01 false What quarterly reports must the Tribe submit to us... must the Tribe submit to us? (a) Quarterly reports. Each Tribe must collect on a monthly basis, and... Data Report: Disaggregated Data—Sections one and two. Each Tribe must file disaggregated information on...
The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses
ERIC Educational Resources Information Center
Walstad, William B.; Wagner, Jamie
2016-01-01
This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…
Mahony, J B; Brown, I R
1979-11-22
Intravenous injection of (+)-lysergic acid diethylamide into young rabbits induced a transient brain-specific disaggregation of polysomes to monosomes. Investigation of the fate of mRNA revealed that brain poly(A+)mRNA was conserved. In particular, mRNA coding for brain-specific S100 protein was not degraded, nor was it released into free ribonucleoprotein particles. Following the (+)-lysergic acid diethylamide-induced disaggregation of polysomes, mRNA shifted from polysomes and accumulated on monosomes. Formation of a blocked monosome complex, which contained intact mRNA and 40-S plus 60-S ribosomal subunits but lacked nascent peptide chains, suggested that (+)-lysergic acid diethylamide inhibited brain protein synthesis at a specific stage of late initiation or early elongation.
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
Lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves the purpose of cost-efficient penumbra evaluation well. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while the B-spline yields the minimum penumbra mean. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design for multileaf collimators.
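The sketch below illustrates only the biobjective selection step, keeping the non-dominated candidates when both penumbra mean and penumbra variance are to be minimised; it is a generic Pareto-front filter, not the paper's genetic algorithm, and the numeric values are placeholders.

    # Minimal Pareto-front filter for two objectives to be minimised
    # (penumbra mean, penumbra variance). Illustration only.
    def pareto_front(candidates):
        """candidates: list of (penumbra_mean, penumbra_variance) tuples."""
        front = []
        for i, a in enumerate(candidates):
            dominated = any(
                (b[0] <= a[0] and b[1] <= a[1]) and (b[0] < a[0] or b[1] < a[1])
                for j, b in enumerate(candidates) if j != i
            )
            if not dominated:
                front.append(a)
        return front

    print(pareto_front([(3.2, 0.10), (3.0, 0.20), (3.5, 0.05), (3.4, 0.30)]))
    # (3.4, 0.30) is dominated by (3.2, 0.10); the other three are Pareto-optimal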
Global climate shocks to agriculture from 1950 - 2015
NASA Astrophysics Data System (ADS)
Jackson, N. D.; Konar, M.; Debaere, P.; Sheffield, J.
2016-12-01
Climate shocks represent a major disruption to crop yields and agricultural production, yet a consistent and comprehensive database of agriculturally relevant climate shocks does not exist. To this end, we conduct a spatially and temporally disaggregated analysis of climate shocks to agriculture from 1950-2015 using a new gridded dataset. We quantify the occurrence and magnitude of climate shocks for all global agricultural areas during the growing season using a 0.25-degree spatial grid and daily time scale. We include all major crops and both temperature and precipitation extremes in our analysis. Critically, we evaluate climate shocks to all potential agricultural areas to improve projections within our time series. To do this, we use Global Agro-Ecological Zones maps from the Food and Agricultural Organization, the Princeton Global Meteorological Forcing dataset, and crop calendars from Sacks et al. (2010). We trace the dynamic evolution of climate shocks to agriculture, evaluate the spatial heterogeneity in agriculturally relevant climate shocks, and identify the crops and regions that are most prone to climate shocks.
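As a minimal illustration of a gridded growing-season shock count (the thresholds, season lengths, and synthetic fields below are assumptions, not the study's shock definitions), each grid cell's days with daily maximum temperature above a crop-damage threshold within its growing-season window can be counted with simple array operations:

    import numpy as np

    # Minimal sketch: count growing-season days whose daily maximum temperature
    # exceeds an assumed heat-shock threshold, per grid cell.
    days, ny, nx = 365, 4, 5
    rng = np.random.default_rng(1)
    tmax = 20 + 15 * rng.random((days, ny, nx))          # daily Tmax field, degrees C
    season_start = rng.integers(90, 120, size=(ny, nx))  # planting day-of-year per cell
    season_end = season_start + 120                      # assumed 120-day growing season

    doy = np.arange(days)[:, None, None]
    in_season = (doy >= season_start) & (doy < season_end)
    hot_day = tmax > 33.0                                 # assumed heat-shock threshold
    shock_days = np.sum(in_season & hot_day, axis=0)      # (ny, nx) counts per cell
    print(shock_days)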
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sintov, Nicole; Orosz, Michael; Schultz, P. Wesley
2015-01-01
The mission of the Personalized Energy Reduction Cyber-physical System (PERCS) is to create new possibilities for improving building operating efficiency, enhancing grid reliability, avoiding costly power interruptions, and mitigating greenhouse gas emissions. PERCS proposes to achieve these outcomes by engaging building occupants as partners in a user-centered smart service platform. Using a non-intrusive load monitoring approach, PERCS relies on a single sensing point in each home to capture smart electric meter data in real time. The household energy signal is disaggregated into individual load signatures of common appliances (e.g., air conditioners), yielding near real-time appliance-level energy information. Users interact with PERCS via a mobile phone platform that provides household- and appliance-level energy feedback, tailored recommendations, and a competitive game tied to energy use and behavioral changes. PERCS challenges traditional energy management approaches by directly engaging occupants as key elements in a technological system.
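The abstract does not describe PERCS's disaggregation algorithm, so the following is only a generic non-intrusive load monitoring sketch: detect step changes in the whole-home power signal and label each step with the known appliance whose rated power is closest. The appliance signatures and threshold are assumptions.

    import numpy as np

    # Minimal NILM sketch (not the PERCS algorithm): match power step changes to
    # assumed appliance ratings.
    signatures = {"fridge": 150.0, "air_conditioner": 2000.0, "kettle": 1200.0}

    def label_events(power, min_step=100.0):
        events = []
        steps = np.diff(power)
        for t, step in enumerate(steps):
            if abs(step) < min_step:
                continue
            name = min(signatures, key=lambda k: abs(signatures[k] - abs(step)))
            events.append((t + 1, name, "on" if step > 0 else "off"))
        return events

    aggregate = np.array([300, 300, 2300, 2300, 2300, 300, 1500, 1500, 300], dtype=float)
    print(label_events(aggregate))
    # [(2, 'air_conditioner', 'on'), (5, 'air_conditioner', 'off'),
    #  (6, 'kettle', 'on'), (8, 'kettle', 'off')]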
Moser, Martin; Bertram, Ulf; Peter, Karlheinz; Bode, Christoph; Ruef, Johannes
2003-04-01
Platelet GPIIb/IIIa antagonists are not only used to prevent platelet aggregation, but also in combination with thrombolytic agents for the treatment of coronary thrombi. Recent data indicate a potential of abciximab alone to dissolve thrombi in vivo. We investigated the potential of abciximab, eptifibatide, and tirofiban to dissolve platelet aggregates in vitro. Adenosine diphosphate (ADP)-induced platelet aggregation could be reversed in a concentration-dependent manner by all three GPIIb/IIIa antagonists when added after the aggregation curve reached half-maximal aggregation. The concentrations chosen are comparable with in vivo plasma concentrations in clinical applications. Disaggregation reached a maximum degree of 72.4% using 0.5 microg/ml tirofiban, 91.5% using 3.75 microg/ml eptifibatide, and 48.4% using 50 microg/ml abciximab (P < 0.05, respectively). A potential fibrinolytic activity of the GPIIb/IIIa antagonists was ruled out by preincubation with aprotinin or by a plasma clot assay. A stable model Chinese hamster ovary (CHO) cell line expressing the activated form of GPIIb/IIIa was used to confirm the disaggregation capacity of GPIIb/IIIa antagonists found in platelets. Not only abciximab, but also eptifibatide and tirofiban have the potential to disaggregate newly formed platelet clusters in vitro. Because enzyme-dependent fibrinolysis does not appear to be involved, competitive removal of fibrinogen by the receptor antagonists is the most likely mechanism.
A portable microfluidic system for rapid measurement of the erythrocyte sedimentation rate.
Isiksacan, Ziya; Erel, Ozcan; Elbuken, Caglar
2016-11-29
The erythrocyte sedimentation rate (ESR) is a frequently used 30 min or 60 min clinical test for screening of several inflammatory conditions, infections, trauma, and malignant diseases, as well as non-inflammatory conditions including prostate cancer and stroke. Erythrocyte aggregation (EA) is a physiological process in which erythrocytes form face-to-face linear structures, called rouleaux, at stasis or low shear rates. In this work, we propose a method for ESR measurement from EA. We developed a microfluidic opto-electro-mechanical system with which we experimentally showed a significant correlation (R^2 = 0.86) between ESR and EA. The microfluidic system was shown to measure ESR from EA using fingerprick blood in 2 min. 40 μl of whole blood is loaded into a disposable polycarbonate cartridge which is illuminated with a near-infrared emitting diode. Erythrocytes were disaggregated under the effect of a mechanical shear force using a solenoid pinch valve. Following complete disaggregation, the light transmitted through the cartridge was measured using a photodetector for 1.5 min. The intensity level is at its lowest at complete disaggregation and highest at complete aggregation. We calculated ESR from the transmitted signal profile. We also developed another microfluidic cartridge specifically for monitoring the EA process in real time during ESR measurement. The presented system is suitable for ultrafast, low-cost, and low-sample-volume measurement of ESR at the point of care.
Bertomeu-Motos, Arturo; Blanco, Andrea; Badesa, Francisco J; Barios, Juan A; Zollo, Loredana; Garcia-Aracil, Nicolas
2018-02-20
End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for the upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM), together with the clinical assessment scales, offers a comprehensive assessment to the therapist. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed on the upper arm. The proposed algorithm is based on the inverse of the augmented Jacobian, as in the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815-28, 2015). However, the estimation of the elbow joint location is performed through the computation of the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb necessary to start the integration method, a protocol to manually measure the upper arm and forearm lengths, and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm while healthy subjects performed upper limb movements holding the end effector of a seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the 'PUPArm' planar robot with three post-stroke patients. The proposed algorithm reports a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean over the seven joints), with high correlation in all the joints, with respect to the real upper limb configuration acquired through the optoelectronic system. The estimation of the upper limb joints with both algorithms reveals an instability in the previous algorithm when shoulder movements appear, owing to the inevitable trunk compensation in post-stroke patients. The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be accurately determined and could become an objective assessment parameter for a comprehensive assessment.
ERIC Educational Resources Information Center
Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.
2017-01-01
While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…
Boado, Ruben J; Zhang, Yufeng; Zhang, Yun; Xia, Chun-Fang; Pardridge, William M
2007-01-01
Delivery of monoclonal antibody therapeutics across the blood-brain barrier is an obstacle to the diagnosis or therapy of CNS disease with antibody drugs. The immune therapy of Alzheimer's disease attempts to disaggregate the amyloid plaque of Alzheimer's disease with an anti-Abeta monoclonal antibody. The present work is based on a three-step model of immune therapy of Alzheimer's disease: (1) influx of the anti-Abeta monoclonal antibody across the blood-brain barrier in the blood to brain direction, (2) binding and disaggregation of Abeta fibrils in brain, and (3) efflux of the anti-Abeta monoclonal antibody across the blood-brain barrier in the brain to blood direction. This is accomplished with the genetic engineering of a trifunctional fusion antibody that binds (1) the human insulin receptor, which mediates the influx from blood to brain across the blood-brain barrier, (2) the Abeta fibril to disaggregate amyloid plaque, and (3) the Fc receptor, which mediates the efflux from brain to blood across the blood-brain barrier. This fusion protein is a new antibody-based therapeutic for Alzheimer's disease that is specifically engineered to cross the human blood-brain barrier in both directions.
Huang, P Y; Hellums, J D
1993-01-01
A population balance equation (PBE) mathematical model for analyzing platelet aggregation kinetics was developed in Part I (Huang, P. Y., and J. D. Hellums. 1993. Biophys. J. 65: 334-343) of a set of three papers. In this paper, Part II, platelet aggregation and related reactions are studied in the uniform, known shear stress field of a rotational viscometer and interpreted by means of the model. Experimental determinations are made of the platelet-aggregate particle size distributions as they evolve in time under the aggregating influence of shear stress. The PBE model is shown to give good agreement with experimental determinations when either a reversible (aggregation and disaggregation) or an irreversible (no disaggregation) form of the model is used. This finding suggests that for the experimental conditions studied, disaggregation processes are of only secondary importance. During shear-induced platelet aggregation, only a small fraction of platelet collisions result in the binding together of the involved platelets. The modified collision efficiency is approximately zero for shear rates below 3000 s^-1. It increases with shear rates above 3000 s^-1 to about 0.01 at a shear rate of 8000 s^-1. Addition of platelet chemical agonists yields order-of-magnitude increases in collision efficiency. The collision efficiency for shear-induced platelet aggregation is about an order of magnitude lower at 37 °C than at 24 °C. The PBE model gives a much more accurate representation of aggregation kinetics than an earlier model based on a monodispersed particle size distribution. PMID:8369442
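A minimal irreversible (no disaggregation) discrete population-balance sketch follows, in the spirit of a Smoluchowski scheme with a laminar-shear (orthokinetic) collision kernel and a constant collision efficiency; it is an illustration only, not the paper's PBE model, and all parameter values are placeholders.

    import numpy as np

    # Minimal irreversible population-balance sketch with a shear collision kernel.
    # Aggregates larger than kmax are simply truncated (mass leaves the system).
    G = 5000.0          # shear rate, 1/s
    d1 = 2.0e-6         # single-platelet diameter, m
    alpha = 0.01        # collision efficiency (placeholder)
    kmax = 20           # largest aggregate size tracked, in platelet units

    def beta(i, j):
        di = d1 * i ** (1.0 / 3.0)
        dj = d1 * j ** (1.0 / 3.0)
        return (G / 6.0) * (di + dj) ** 3      # orthokinetic collision frequency kernel

    def rhs(n):
        dn = np.zeros_like(n)
        for k in range(1, kmax + 1):
            gain = 0.5 * sum(alpha * beta(i, k - i) * n[i] * n[k - i] for i in range(1, k))
            loss = n[k] * sum(alpha * beta(k, j) * n[j] for j in range(1, kmax + 1))
            dn[k] = gain - loss
        return dn

    n = np.zeros(kmax + 1)
    n[1] = 3.0e14        # initial platelet number concentration, 1/m^3
    dt, steps = 1e-3, 2000
    for _ in range(steps):          # simple forward-Euler integration over 2 s
        n += dt * rhs(n)
    print("fraction of singlets remaining:", n[1] / 3.0e14)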
NASA Astrophysics Data System (ADS)
Bárdossy, András; Pegram, Geoffrey
2017-01-01
The use of radar measurements for the space-time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations, which are important for design. The purpose of the paper is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short-duration extremes. Additionally, a statistical procedure not based on a matching day-by-day correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected, leaving a small number of L days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest-neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest L radar-based days. Of course, the timings of radar and gauge maxima can be different, so the method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording over the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer not masked out by the radar footprint. The pluviometer data were also aggregated to daily totals for the same purpose. The extremes obtained using the disaggregation methods were compared to the observed extremes in a cross-validation procedure. The unusual and novel goal was not to reproduce the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable.
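The core disaggregation step can be illustrated with a minimal sketch, assuming 15-min radar values for one day and a gauge-based daily total: the radar values are used only for their relative temporal pattern and are rescaled so that they sum to the daily total. This is an illustration of that single step, not the full correction procedure described above.

    import numpy as np

    # Minimal sketch: rescale a radar pixel's 15-min pattern to a gauge daily total.
    # Days on which the radar recorded nothing are left dry.
    def disaggregate_day(daily_total, radar_15min):
        radar_15min = np.asarray(radar_15min, dtype=float)   # 96 values per day
        radar_sum = radar_15min.sum()
        if radar_sum <= 0.0:
            return np.zeros_like(radar_15min)
        return daily_total * radar_15min / radar_sum

    radar_day = np.zeros(96)
    radar_day[40:44] = [0.5, 2.0, 1.0, 0.5]                  # a short storm seen by radar
    quarter_hourly = disaggregate_day(12.4, radar_day)       # gauge total of 12.4 mm
    print(quarter_hourly.sum(), quarter_hourly.max())        # 12.4 mm total, peak 15-min depth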
Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector.
Fan, Peng; Ma, Tianyu; Wei, Qingyang; Yao, Rutao; Liu, Yaqiang; Wang, Shi
2016-02-07
The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated four crystal surface finishing options, formed as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options, on a DER detector. We also tested one linear and one logarithmic DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector improves DOI resolution but degrades energy resolution; (2) a crystal surface finishing with diffuse polishing combined with a specular reflector appears to be a favorable candidate, with a good balance of DOI and energy resolution; and (3) the linear and logarithmic DOI calculation algorithms show overall comparable DOI error, the linear algorithm being better for photon interactions near the ends of the crystal and the logarithmic algorithm better near the center. These results provide useful guidance for DER DOI detector design in choosing the crystal surface finishing and DOI calculation methods.
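For orientation, the two estimator forms commonly used for dual-ended readout crystals can be sketched as below; the slope and offset calibration constants and the crystal length are placeholders, not the paper's fitted values.

    import math

    # Common estimator forms for a dual-ended readout crystal (illustration only).
    # s1 and s2 are the signal amplitudes measured at the two crystal ends.
    CRYSTAL_LENGTH = 20.0   # mm, assumed

    def doi_linear(s1, s2, slope=1.0, offset=0.5):
        ratio = (s1 - s2) / (s1 + s2)
        return CRYSTAL_LENGTH * (slope * ratio + offset)

    def doi_log(s1, s2, slope=0.25, offset=0.5):
        return CRYSTAL_LENGTH * (slope * math.log(s1 / s2) + offset)

    for s1, s2 in [(900.0, 300.0), (600.0, 600.0), (300.0, 900.0)]:
        print(round(doi_linear(s1, s2), 2), round(doi_log(s1, s2), 2))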
Toward improved health: disaggregating Asian American and Native Hawaiian/Pacific Islander data.
Srinivasan, S; Guillermo, T
2000-01-01
The 2000 census, with its option for respondents to mark 1 or more race categories, is the first US census to recognize the multiethnic nature of all US populations but especially Asian Americans and Native Hawaiians/Pacific Islanders. If Asian Americans and Native Hawaiians/Pacific Islanders have for the most part been "invisible" in policy debates regarding such matters as health care and immigration, it has been largely because of a paucity of data stemming from the lack of disaggregated data on this heterogeneous group of peoples. Studies at all levels should adhere to these disaggregated classifications. Also, in addition to oversampling procedures, there should be greater regional/local funding for studies in regions where Asian American and Native Hawaiian/Pacific Islander populations are substantial. PMID:11076241
Simulating optoelectronic systems for remote sensing with SENSOR
NASA Astrophysics Data System (ADS)
Boerner, Anko
2003-04-01
The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.
An End-to-End Loss Discrimination Scheme for Multimedia Transmission over Wireless IP Networks
NASA Astrophysics Data System (ADS)
Zhao, Hai-Tao; Dong, Yu-Ning; Li, Yang
With the rapid growth of wireless IP networks, wireless IP access networks have many potential applications in a variety of fields in civilian and military environments. Many of these applications, such as real-time audio/video streaming, will require some form of end-to-end QoS assurance. In this paper, an algorithm called WMPLD (Wireless Multimedia Packet Loss Discrimination) is proposed for multimedia transmission control over wired-wireless hybrid IP networks. The relationship between packet length and packet loss rate in the Gilbert wireless error model is investigated. Furthermore, the algorithm can detect the nature of packet losses by sending large and small packets alternately, and control the sending rate of nodes. In addition, by means of an updating factor K, the algorithm can adapt quickly to changes in network state. Simulation results show that, compared to previous algorithms, the WMPLD algorithm can improve network throughput as well as reduce the congestion loss rate in various situations.
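The intuition behind size-based loss discrimination can be sketched as follows; this is an illustration of the idea only, not the WMPLD algorithm, and the ratio threshold is an assumption: in a Gilbert-type wireless channel the loss probability grows with packet length, whereas congestion drops are largely size-independent, so comparing the loss rates of alternately sent large and small probes hints at the cause of loss.

    # Minimal sketch of size-based loss discrimination (not the WMPLD algorithm).
    def classify_loss(large_sent, large_lost, small_sent, small_lost, ratio_threshold=1.5):
        large_rate = large_lost / large_sent
        small_rate = small_lost / small_sent
        if large_rate == 0.0 and small_rate == 0.0:
            return "no loss"
        if small_rate == 0.0 or large_rate / small_rate >= ratio_threshold:
            return "wireless loss"        # loss rate scales with packet length
        return "congestion loss"          # similar loss rates regardless of length

    print(classify_loss(500, 40, 500, 5))    # wireless loss
    print(classify_loss(500, 30, 500, 27))   # congestion loss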
Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V
2018-05-15
Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
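A minimal fixed-velocity-threshold classifier is sketched below to illustrate the kind of sample labeling such algorithms perform; it is not the MNH or ONH algorithm, and the sampling rate, threshold, and synthetic gaze trace are assumptions.

    import numpy as np

    # Minimal fixed-threshold velocity classifier: label each gaze sample as saccade
    # or fixation by whether its angular velocity exceeds a fixed threshold.
    def classify_samples(x_deg, y_deg, fs_hz=1000.0, vel_threshold=30.0):
        vx = np.gradient(x_deg) * fs_hz                  # deg/s
        vy = np.gradient(y_deg) * fs_hz
        speed = np.hypot(vx, vy)
        return np.where(speed > vel_threshold, "saccade", "fixation")

    t = np.arange(0, 0.2, 0.001)
    x = np.where(t < 0.1, 1.0, 6.0) + 0.01 * np.random.default_rng(2).standard_normal(t.size)
    y = np.zeros_like(x)
    labels = classify_samples(x, y)
    print((labels == "saccade").sum(), "samples flagged as saccadic")
    # expect only the couple of samples around the 5-degree step to be flagged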
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie
PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. It was developed by the SLAC National Accelerator Laboratory, Stanford, USA, and has been running for the last 20 years. It has more than 700 monitoring agents and remote sites which monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data are publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data are partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, anomalies are further analyzed to identify the time frame and location of the hosts generating the major percentage of the anomalies in the PingER data set, which ranges from 1998 to 2016.
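The pipeline described above can be approximated with a minimal scikit-learn sketch: k-NN imputation of missing measurements, k-means partitioning, and a simplified, unweighted cluster-based outlier score in which points in small clusters are scored by their distance to the nearest large-cluster centroid. The feature values, thresholds, and synthetic data are assumptions, and the scoring is a stand-in for the full CBLOF algorithm, not the paper's implementation.

    import numpy as np
    from sklearn.impute import KNNImputer
    from sklearn.cluster import KMeans

    # Minimal sketch: impute, cluster, then score outliers relative to large clusters.
    rng = np.random.default_rng(3)
    X = np.column_stack([rng.normal(120, 15, 300),      # round-trip time, ms
                         rng.normal(0.5, 0.2, 300)])    # packet loss, %
    X[rng.random(X.shape) < 0.05] = np.nan              # 5% missing values
    X[:5] = [[900, 12], [850, 10], [880, 11], [920, 13], [870, 9]]   # injected anomalies

    X_imp = KNNImputer(n_neighbors=5).fit_transform(X)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_imp)
    sizes = np.bincount(km.labels_, minlength=3)
    large = sizes >= 0.10 * len(X_imp)                  # "large" clusters hold >= 10% of points
    large_centroids = km.cluster_centers_[large]

    own = np.linalg.norm(X_imp - km.cluster_centers_[km.labels_], axis=1)
    nearest_large = np.min(np.linalg.norm(X_imp[:, None, :] - large_centroids[None, :, :], axis=2), axis=1)
    score = np.where(large[km.labels_], own, nearest_large)

    print("flagged rows:", np.where(score > np.percentile(score, 98))[0])   # includes rows 0-4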
Solving Inverse Kinematics of Robot Manipulators by Means of Meta-Heuristic Optimisation
NASA Astrophysics Data System (ADS)
Wichapong, Kritsada; Bureerat, Sujin; Pholdee, Nantiwat
2018-05-01
This paper presents the use of meta-heuristic algorithms (MHs) for solving the inverse kinematics of robot manipulators based on forward kinematics. The design variables are the joint angular displacements used to move the robot end-effector to a target in Cartesian space, while the design problem is posed to minimize the error between the target points and the positions of the robot end-effector. The problem is dynamic, as the target points are constantly changed by the robot user. Several well-established MHs are used to solve the problem, and the results obtained from the different meta-heuristics are compared in terms of end-effector error and search speed. From the study, the best performer is identified to serve as the baseline for future development of MH-based inverse kinematics solvers.
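As one concrete stand-in for the meta-heuristics compared in the paper, the sketch below solves the stated formulation, minimising the Cartesian error of the end-effector over joint angular displacements, for a hypothetical planar three-link arm using SciPy's differential evolution. Link lengths, bounds and the target point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

LINKS = np.array([0.4, 0.3, 0.2])      # illustrative link lengths (m)
TARGET = np.array([0.5, 0.4])          # illustrative target in Cartesian space


def forward_kinematics(joint_angles):
    """End-effector position of a planar serial arm; the design variables
    are the joint angular displacements."""
    angles = np.cumsum(joint_angles)
    x = np.sum(LINKS * np.cos(angles))
    y = np.sum(LINKS * np.sin(angles))
    return np.array([x, y])


def position_error(joint_angles):
    """Objective: Euclidean error between target and end-effector."""
    return np.linalg.norm(forward_kinematics(joint_angles) - TARGET)


bounds = [(-np.pi, np.pi)] * len(LINKS)
result = differential_evolution(position_error, bounds, seed=1, tol=1e-8)
print("joint angles:", result.x, "error:", result.fun)
```

Swapping `differential_evolution` for another population-based optimiser is the kind of comparison the abstract describes; the objective function stays the same.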
Algorithm and Architecture Independent Benchmarking with SEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.
2016-05-23
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Chang, Sung-A; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Jang, Shin Yi; Park, Sung-Ji; Choi, Jin-Oh; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2011-08-01
With recent developments in echocardiographic technology, a new system using real-time three-dimensional echocardiography (RT3DE) that allows single-beat acquisition of the entire volume of the left ventricle and incorporates algorithms for automated border detection has been introduced. Provided that these techniques are acceptably reliable, three-dimensional echocardiography may be much more useful for clinical practice. The aim of this study was to evaluate the feasibility and accuracy of left ventricular (LV) volume measurements by RT3DE using the single-beat full-volume capture technique. One hundred nine consecutive patients scheduled for cardiac magnetic resonance imaging and RT3DE using the single-beat full-volume capture technique on the same day were recruited. LV end-systolic volume, end-diastolic volume, and ejection fraction were measured using an auto-contouring algorithm from data acquired on RT3DE. The data were compared with the same measurements obtained using cardiac magnetic resonance imaging. Volume measurements on RT3DE with single-beat full-volume capture were feasible in 84% of patients. Both interobserver and intraobserver variability of three-dimensional measurements of end-systolic and end-diastolic volumes showed excellent agreement. Pearson's correlation analysis showed a close correlation of end-systolic and end-diastolic volumes between RT3DE and cardiac magnetic resonance imaging (r = 0.94 and r = 0.91, respectively, P < .0001 for both). Bland-Altman analysis showed reasonable limits of agreement. After application of the auto-contouring algorithm, the rate of successful auto-contouring (cases requiring minimal manual corrections) was <50%. RT3DE using single-beat full-volume capture is an easy and reliable technique to assess LV volume and systolic function in clinical practice. However, the image quality and low frame rate still limit its application for dilated left ventricles, and the automated volume analysis program needs more development to make it clinically efficacious. Copyright © 2011 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
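A small sketch of the agreement statistics mentioned in the abstract (Pearson correlation and Bland-Altman limits of agreement) for paired volume measurements; the arrays below are placeholders rather than study data.

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Placeholder paired end-diastolic volumes (mL): RT3DE vs cardiac MRI.
rt3de = np.array([110.0, 95.0, 150.0, 130.0, 170.0])
cmr   = np.array([115.0, 100.0, 148.0, 138.0, 176.0])

r = np.corrcoef(rt3de, cmr)[0, 1]
bias, lo, hi = bland_altman(rt3de, cmr)
print(f"Pearson r = {r:.2f}, bias = {bias:.1f} mL, LoA = [{lo:.1f}, {hi:.1f}] mL")
```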
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray-tracing algorithm, our model serves the purpose of cost-efficient penumbra evaluation well. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a bi-objective function of penumbra mean and variance introduced, a genetic algorithm is used to approximate the Pareto frontier. Results show that for a circular-arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while a B-spline yields the minimum penumbra mean. For treatment modalities in clinical application, the optimized leaf ends are in close agreement with actual shapes. Taken together, the proposed method can provide insight into the leaf end shape design of multileaf collimators. PMID:27110274
Advertising media and cigarette demand.
Goel, Rajeev K
2011-01-01
Using state-level panel data for the USA spanning three decades, this research estimates the demand for cigarettes. The main contribution lies in studying the effects of cigarette advertising disaggregated across five qualitatively different groups. Results show cigarette demand to be near unit elastic, the income effects to be generally insignificant and border price effects and habit effects to be significant. Regarding advertising effects, aggregate cigarette advertising has a negative effect on smoking. Important differences across advertising media emerge when cigarette advertising is disaggregated. The effects of public entertainment and Internet cigarette advertising are stronger than those of other media. Anti-smoking messages accompanying print cigarette advertising seem relatively more effective. Implications for smoking control policy are discussed.
Phillips, Susan P; Hammarström, Anne
2011-01-01
Limited existing research on gender inequities suggests that for men workplace atmosphere shapes wellbeing while women are less susceptible to socioeconomic or work status but vulnerable to home inequities. Using the 2007 Northern Swedish Cohort (n = 773) we identified relative contributions of perceived gender inequities in relationships, financial strain, and education to self-reported health to determine whether controlling for sex, examining interactions between sex and other social variables, or sex-disaggregating data yielded most information about sex differences. Men had lower education but also less financial strain, and experienced less gender inequity. Overall, low education and financial strain detracted from health. However, sex-disaggregated data showed this to be true for women, whereas for men only gender inequity at home affected health. In the relatively egalitarian Swedish environment where women more readily enter all work arenas and men often provide parenting, traditional primacy of the home environment (for women) and the work environment (for men) in shaping health is reversing such that perceived domestic gender inequity has a significant health impact on men, while for women only education and financial strain are contributory. These outcomes were identified only when data were sex-disaggregated.
Heat shock protein (Hsp) 70 is an activator of the Hsp104 motor.
Lee, Jungsoon; Kim, Ji-Hyun; Biter, Amadeo B; Sielaff, Bernhard; Lee, Sukyeong; Tsai, Francis T F
2013-05-21
Heat shock protein (Hsp) 104 is a ring-forming, protein-remodeling machine that harnesses the energy of ATP binding and hydrolysis to drive protein disaggregation. Although Hsp104 is an active ATPase, the recovery of functional protein requires the species-specific cooperation of the Hsp70 system. However, like Hsp104, Hsp70 is an active ATPase, which recognizes aggregated and aggregation-prone proteins, making it difficult to differentiate the mechanistic roles of Hsp104 and Hsp70 during protein disaggregation. Mapping the Hsp70-binding sites in yeast Hsp104 using peptide array technology and photo-cross-linking revealed a striking conservation of the primary Hsp70-binding motifs on the Hsp104 middle-domain across species, despite lack of sequence identity. Remarkably, inserting a Strep-Tactin binding motif at the spatially conserved Hsp70-binding site elicits the Hsp104 protein disaggregating activity that now depends on Strep-Tactin but no longer requires Hsp70/40. Consistent with a Strep-Tactin-dependent activation step, we found that full-length Hsp70 on its own could activate the Hsp104 hexamer by promoting intersubunit coordination, suggesting that Hsp70 is an activator of the Hsp104 motor.
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic sources.
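The MSTMCF procedure is not spelled out in the abstract, so the following is only a hedged sketch of how the two named ingredients could be combined with NetworkX: build a minimum spanning tree as the delay-oriented backbone, then route demand over it as a minimum cost flow. The toy topology, capacities and demands are invented for illustration.

```python
import networkx as nx

# Toy hybrid wireless-optical access network: edge weights model link delay.
G = nx.Graph()
G.add_weighted_edges_from([
    ("olt", "onu1", 1.0), ("olt", "onu2", 1.2),
    ("onu1", "ap1", 0.8), ("onu2", "ap2", 0.9), ("ap1", "ap2", 2.0),
])

# Step 1: a minimum spanning tree as the delay-oriented routing backbone.
backbone = nx.minimum_spanning_tree(G, weight="weight")

# Step 2: route traffic on the backbone as a minimum cost flow problem.
D = nx.DiGraph()
for u, v, data in backbone.edges(data=True):
    cost = int(data["weight"] * 10)          # integer costs for the solver
    D.add_edge(u, v, capacity=10, weight=cost)
    D.add_edge(v, u, capacity=10, weight=cost)
D.nodes["ap1"]["demand"] = -4                # wireless node injects 4 units
D.nodes["olt"]["demand"] = 4                 # optical line terminal absorbs them

flow = nx.min_cost_flow(D)
print(flow)
```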
Advanced End-to-end Simulation for On-board Processing (AESOP)
NASA Technical Reports Server (NTRS)
Mazer, Alan S.
1994-01-01
Developers of data compression algorithms typically use their own software together with commercial packages to implement, evaluate and demonstrate their work. While convenient for an individual developer, this approach makes it difficult to build on or use another's work without intimate knowledge of each component. When several people or groups work on different parts of the same problem, the larger view can be lost. What's needed is a simple piece of software to stand in the gap and link together the efforts of different people, enabling them to build on each other's work, and providing a base for engineers and scientists to evaluate the parts as a cohesive whole and make design decisions. AESOP (Advanced End-to-end Simulation for On-board Processing) attempts to meet this need by providing a graphical interface to a developer-selected set of algorithms, interfacing with compiled code and standalone programs, as well as procedures written in the IDL and PV-Wave command languages. As a proof of concept, AESOP is outfitted with several data compression algorithms integrating previous work on different processors (AT&T DSP32C, TI TMS320C30, SPARC). The user can specify at run-time the processor on which individual parts of the compression should run. Compressed data is then fed through simulated transmission and uncompression to evaluate the effects of compression parameters, noise and error correction algorithms. The following sections describe AESOP in detail. Section 2 describes fundamental goals for usability. Section 3 describes the implementation. Sections 4 through 5 describe how to add new functionality to the system and present the existing data compression algorithms. Sections 6 and 7 discuss portability and future work.
Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, H.; Wade, J.
2014-04-01
While it is important to make the equipment (or 'plant') in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10 to 30 percent of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) and programmed a data logger to collect data at 5 second intervals whenever there was a hot water draw. This data was used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Five houses near Syracuse, NY, were monitored. Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.
Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, Hugh; Wade, Jeremy
2014-04-01
While it is important to make the equipment (or "plant") in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10%-30% of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) in five houses near Syracuse, NY, and programmed a data logger to collect data at 5 second intervals whenever there was a hot water draw. This data was used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Overall, the procedures to assign water draws to each end use were able to successfully assign about 50% of the water draws, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.
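A small sketch of the "useful hot water" accounting both reports describe: each logged draw counts as useful only if the fixture temperature exceeds a threshold. The 105 °F threshold and the sample records are illustrative assumptions, not values from the reports.

```python
# Each record: (fixture_id, duration_s, gallons, fixture_temp_F) for one draw.
draws = [
    ("shower", 420, 12.0, 118.0),
    ("kitchen_sink", 25, 0.4, 96.0),    # never reached a useful temperature
    ("dishwasher", 90, 2.1, 123.0),
]

USEFUL_TEMP_F = 105.0   # illustrative threshold, not the reports' value

total = sum(gal for _, _, gal, _ in draws)
useful = sum(gal for _, _, gal, temp in draws if temp >= USEFUL_TEMP_F)
print(f"useful fraction of hot water: {useful / total:.0%}")
```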
Integration of Multiple Data Sources to Simulate the Dynamics of Land Systems
Deng, Xiangzheng; Su, Hongbo; Zhan, Jinyan
2008-01-01
In this paper we present and develop a new model, which we have called Dynamics of Land Systems (DLS). The DLS model is capable of integrating multiple data sources to simulate the dynamics of a land system. Three main modules are incorporated in DLS: a spatial regression module, to explore the relationship between land uses and influencing factors, a scenario analysis module of the land uses of a region during the simulation period and a spatial disaggregation module, to allocate land use changes from a regional level to disaggregated grid cells. A case study on Taips County in North China is incorporated in this paper to test the functionality of DLS. The simulation results under the baseline, economic priority and environmental scenarios help to understand the land system dynamics and project near future land-use trajectories of a region, in order to focus management decisions on land uses and land use planning. PMID:27879726
Murray, Amber N; Palhano, Fernando L; Bieschke, Jan; Kelly, Jeffery W
2013-01-01
The accumulation of cross-β-sheet amyloid fibrils is the hallmark of amyloid diseases. Recently, we reported the discovery of amyloid disaggregase activities in extracts from mammalian cells and Caenorhabditis elegans. However, we have discovered a problem with the interpretation of our previous results as Aβ disaggregation in vitro. Here, we show that Aβ fibrils adsorb to the plastic surface of multiwell plates and Eppendorf tubes. This adsorption is markedly increased in the presence of complex biological mixtures subjected to a denaturing air-water interface. The time-dependent loss of thioflavin T fluorescence that we interpreted previously as disaggregation is due to increased adsorption of Aβ amyloid to the surfaces of multiwell plates and Eppendorf tubes in the presence of biological extracts. As the proteins in biological extracts denature over time at the air-water interface due to agitation/shaking, their adsorption increases, in turn promoting adsorption of amyloid fibrils. We delineate important control experiments that quantify the extent of amyloid adsorption to the surface of plastic and quartz containers. Based on the results described in this article, we conclude that our interpretation of the kinetic fibril disaggregation assay data previously reported in Bieschke et al., Protein Sci 2009;18:2231–2241 and Murray et al., Protein Sci 2010;19:836–846 is invalid when used as evidence for a disaggregase activity. Thus, we correct the two prior publications reporting that worm or mammalian cell extracts disaggregate Aβ amyloid fibrils in vitro at 37°C (see Corrigenda in this issue of Protein Science). We apologize for misinterpreting our previous data and for any confounding experimental efforts this may have caused. PMID:23963844
Ciezka, Magdalena; Acosta, Milena; Herranz, Cristina; Canals, Josep M; Pumarola, Martí; Candiota, Ana Paula; Arús, Carles
2016-08-01
The initial aim of this study was to generate a transplantable glial tumour model of low-intermediate grade by disaggregation of a spontaneous tumour mass from genetically engineered models (GEM). This should result in an increased tumour incidence in comparison to GEM animals. An anaplastic oligoastrocytoma (OA) tumour of World Health Organization (WHO) grade III was obtained from a female GEM mouse with the S100β-v-erbB/inK4a-Arf (+/-) genotype maintained in the C57BL/6 background. The tumour tissue was disaggregated; tumour cells from it were grown in aggregates and stereotactically injected into C57BL/6 mice. Tumour development was followed using Magnetic Resonance Imaging (MRI), while changes in the metabolomics pattern of the masses were evaluated by Magnetic Resonance Spectroscopy/Spectroscopic Imaging (MRS/MRSI). Final tumour grade was evaluated by histopathological analysis. The total number of tumours generated from GEM cells from disaggregated tumour (CDT) was 67 with up to 100 % penetrance, as compared to 16 % in the local GEM model, with an average survival time of 66 ± 55 days, up to 4.3-fold significantly higher than the standard GL261 glioblastoma (GBM) tumour model. Tumours produced by transplantation of cells freshly obtained from disaggregated GEM tumour were diagnosed as WHO grade III anaplastic oligodendroglioma (ODG) and OA, while tumours produced from a previously frozen sample were diagnosed as WHO grade IV GBM. We successfully grew CDT and generated tumours from a grade III GEM glial tumour. Freezing and cell culture protocols produced progression to grade IV GBM, which makes the developed transplantable model qualify as potential secondary GBM model in mice.
Linning, Shannon J; Andresen, Martin A; Brantingham, Paul J
2017-12-01
This study investigates whether crime patterns fluctuate periodically throughout the year using data containing different property crime types in two Canadian cities with differing climates. Using police report data, a series of ordinary least squares (OLS; Vancouver, British Columbia) and negative binomial (Ottawa, Ontario) regressions were employed to examine the corresponding temporal patterns of property crime in Vancouver (2003-2013) and Ottawa (2006-2008). Moreover, both aggregate and disaggregate models were run to examine whether different weather and temporal variables had a distinctive impact on particular offences. Overall, results suggest that cities that experience greater variations in weather throughout the year have more distinct increases of property offences in the summer months and that different climate variables affect certain crime types, thus advocating for disaggregate analysis in the future.
Demand modelling of passenger air travel: An analysis and extension, volume 2
NASA Technical Reports Server (NTRS)
Jacobson, I. D.
1978-01-01
Previous intercity travel demand models are evaluated in terms of their ability to predict air travel in a useful way, and the need for disaggregation in the approach to demand modelling is assessed. The viability of incorporating non-conventional factors (i.e. non-econometric, such as time and cost) in travel demand forecasting models is determined. The investigation of existing models is carried out in order to provide insight into their strong points and shortcomings. The model is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach, both pedagogically and mathematically, is discussed. In addition, this volume contains two appendices which should prove useful to the non-specialist in the area.
Namazi-Rad, Mohammad-Reza; Mokhtarian, Payam; Perez, Pascal
2014-01-01
Generating a reliable computer-simulated synthetic population is necessary for knowledge processing and decision-making analysis in agent-based systems in order to measure, interpret and describe each target area and the human activity patterns within it. In this paper, both synthetic reconstruction (SR) and combinatorial optimisation (CO) techniques are discussed for generating a reliable synthetic population for a certain geographic region (in Australia) using aggregated- and disaggregated-level information available for such an area. A CO algorithm using the quadratic function of population estimators is presented in this paper in order to generate a synthetic population while considering a two-fold nested structure for the individuals and households within the target areas. The baseline population in this study is generated from the confidentialised unit record files (CURFs) and 2006 Australian census tables. The dynamics of the created population is then projected over five years using a dynamic micro-simulation model for individual- and household-level demographic transitions. This projection is then compared with the 2011 Australian census. A prediction interval is provided for the population estimates obtained by the bootstrapping method, by which the variability structure of a predictor can be replicated in a bootstrap distribution. PMID:24733522
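A hedged sketch of the combinatorial optimisation idea: households are drawn from a seed sample (standing in for CURF records) and swapped in and out to minimise a quadratic distance to aggregate benchmarks. The seed, benchmarks and simple hill-climbing search are illustrative; the paper's estimator and nested household/individual constraints are richer than this.

```python
import random

random.seed(0)

# Seed households (stand-in for CURF records): (household_size, workers).
seed = [(1, 0), (2, 1), (2, 2), (3, 1), (4, 2), (5, 2)]

# Aggregate benchmarks for the target area: total persons and total workers.
benchmarks = {"persons": 260, "workers": 120}


def objective(pop):
    """Quadratic distance between synthetic totals and the benchmarks."""
    persons = sum(h[0] for h in pop)
    workers = sum(h[1] for h in pop)
    return (persons - benchmarks["persons"]) ** 2 + (workers - benchmarks["workers"]) ** 2


# Start from a random draw of 100 households, then hill-climb by swapping.
population = [random.choice(seed) for _ in range(100)]
best = objective(population)
for _ in range(20000):
    i = random.randrange(len(population))
    old = population[i]
    population[i] = random.choice(seed)
    new = objective(population)
    if new <= best:
        best = new            # keep the swap
    else:
        population[i] = old   # revert the swap

print("final objective:", best)
```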
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for the precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. However, in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analyses, especially when random noise interferes with the analytical signal.
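The dedicated mother wavelet constructed in the paper is not reproduced in the abstract, so the sketch below uses a generic antisymmetric (derivative-of-Gaussian) wavelet only to show the mechanics: convolve the titration curve with scaled wavelets and take the end-point as the titrant volume where the summed coefficient magnitude peaks. The synthetic curve and scales are illustrative.

```python
import numpy as np

def dog_wavelet(length, scale):
    """First derivative of a Gaussian: an antisymmetric wavelet that
    responds strongly to the inflection at a titration end-point."""
    t = np.linspace(-3, 3, length)
    return -t * np.exp(-t ** 2 / 2) / scale

def cwt_endpoint(volume, potential, scales=(5, 9, 15, 25)):
    """Return the titrant volume at the largest summed |CWT| coefficient."""
    response = np.zeros_like(potential, dtype=float)
    for s in scales:
        w = dog_wavelet(length=4 * s + 1, scale=s)
        response += np.abs(np.convolve(potential, w, mode="same"))
    return volume[np.argmax(response)]

# Synthetic sigmoid titration curve with noise (not experimental data).
v = np.linspace(0.0, 20.0, 400)
emf = 300.0 / (1.0 + np.exp(-(v - 12.3) * 4.0))
emf += np.random.default_rng(1).normal(0, 2.0, v.size)
print("estimated end-point volume:", cwt_endpoint(v, emf))
```

Averaging over several scales is one simple way to make the peak location robust to noise and spikes, which is the property the abstract emphasises.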
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.
A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistor (MOV). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.
SENSOR: a tool for the simulation of hyperspectral remote sensing systems
NASA Astrophysics Data System (ADS)
Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel
The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.
Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture
NASA Technical Reports Server (NTRS)
Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek
2015-01-01
This paper evaluates the potential of the embedded Graphics Processing Unit in Nvidia's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) algorithm. Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, as compared to the performance of the high-end 8-core server Intel Xeon CPU, which has 13.5 times higher power consumption.
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
Kamel Boulos, Maged N; Cai, Qiang; Padget, Julian A; Rushton, Gerard
2006-04-01
Confidentiality constraints often preclude the release of disaggregate data about individuals, which limits the types and accuracy of the results of geographical health analyses that could be done. Access to individually geocoded (disaggregate) data often involves lengthy and cumbersome procedures through review boards and committees for approval (and sometimes is not possible). Moreover, current data confidentiality-preserving solutions compatible with fine-level spatial analyses either lack flexibility or yield less than optimal results (because of confidentiality-preserving changes they introduce to disaggregate data), or both. In this paper, we present a simulation case study to illustrate how some analyses cannot be (or will suffer if) done on aggregate data. We then quickly review some existing data confidentiality-preserving techniques, and move on to explore a solution based on software agents with the potential of providing flexible, controlled (software-only) access to unmodified confidential disaggregate data and returning only results that do not expose any person-identifiable details. The solution is thus appropriate for micro-scale geographical analyses where no person-identifiable details are required in the final results (i.e., only aggregate results are needed). Our proposed software agent technique also enables post-coordinated analyses to be designed and carried out on the confidential database(s), as needed, compared to a more conventional solution based on the Web Services model that would only support a rigid, pre-coordinated (pre-determined) and rather limited set of analyses. The paper also provides an exploratory discussion of mobility, security, and trust issues associated with software agents, as well as possible directions/solutions to address these issues, including the use of virtual organizations. Successful partnerships between stakeholder organizations, proper collaboration agreements, clear policies, and unambiguous interpretations of laws and regulations are also much needed to support and ensure the success of any technological solution.
NASA Astrophysics Data System (ADS)
Malbéteau, Yoann; Merlin, Olivier; Molero, Beatriz; Rüdiger, Christoph; Bacon, Stephan
2016-03-01
Validating coarse-scale satellite soil moisture data still represents a big challenge, notably due to the large mismatch existing between the spatial resolution (> 10 km) of microwave radiometers and the representativeness scale (several m) of localized in situ measurements. This study aims to examine the potential of DisPATCh (Disaggregation based on Physical and Theoretical scale Change) for validating SMOS (Soil Moisture and Ocean Salinity) and AMSR-E (Advanced Microwave Scanning Radiometer-Earth observation system) level-3 soil moisture products. The ~40-50 km resolution SMOS and AMSR-E data are disaggregated at 1 km resolution over the Murrumbidgee catchment in Southeastern Australia during a one-year period in 2010-2011, and the satellite products are compared with the in situ measurements of 38 stations distributed within the study area. It is found that disaggregation improves the mean difference, correlation coefficient and slope of the linear regression between satellite and in situ data in 77%, 92% and 94% of cases, respectively. Nevertheless, the downscaling efficiency is lower in winter than during the hotter months when DisPATCh performance is optimal. Consistently, better results are obtained in the semi-arid than in a temperate zone of the catchment. In the semi-arid Yanco region, disaggregation in summer increases the correlation coefficient from 0.63 to 0.78 and from 0.42 to 0.71 for SMOS and AMSR-E in morning overpasses and from 0.37 to 0.63 and from 0.47 to 0.73 for SMOS and AMSR-E in afternoon overpasses, respectively. DisPATCh has strong potential in low vegetated semi-arid areas where it can be used as a tool to evaluate coarse-scale remotely sensed soil moisture by explicitly representing the sub-pixel variability.
NASA Astrophysics Data System (ADS)
Pegram, Geoff; Bardossy, Andras; Sinclair, Scott
2017-04-01
The use of radar measurements for the space time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this presentation we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations which are important for design. The purpose of the presentation is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to un-sampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short duration extremes. In addition, a statistical procedure not based on a matching day by day correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected leaving 12 days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these 12 day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest 12 radar based days in each year. Of course, the timings of radar and gauge maxima can be different, so the new method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense [10 km spacing] set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer, not masked out by the radar foot-print. The pluviometer data were also aggregated to daily totals, for the same purpose. The extremes obtained using disaggregation methods were compared to the observed extremes in a cross validation procedure. The unusual and novel goal was not to obtain the reproduction of the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable. Published as: Bárdossy, A., and G. G. S. Pegram (2017) Journal of Hydrology, Volume 544, pp 397-406
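A minimal sketch of the core disaggregation step described: an interpolated daily gauge total is spread over 15-min intervals in proportion to the radar pixel's sub-daily accumulations, so the gauge fixes the volume and the radar fixes the timing. The arrays are synthetic placeholders, not Bethlehem radar data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic radar accumulations for one day at one pixel: 96 x 15-min steps.
radar_15min = rng.gamma(shape=0.3, scale=1.0, size=96)
radar_15min[rng.random(96) < 0.7] = 0.0        # mostly dry intervals

gauge_daily_mm = 28.4                          # interpolated daily gauge total

# Distribute the daily total using the radar's relative temporal pattern.
weights = radar_15min / radar_15min.sum()
disagg_15min = gauge_daily_mm * weights

assert np.isclose(disagg_15min.sum(), gauge_daily_mm)   # daily mass conserved
print("15-min maximum (mm):", disagg_15min.max())
```

Aggregating the disaggregated series to 15-min, hourly and daily maxima is then enough to build the frequency distributions of point extremes that the study evaluates.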
NASA Astrophysics Data System (ADS)
Sherwood, Christopher R.; Aretxabaleta, Alfredo L.; Harris, Courtney K.; Rinehimer, J. Paul; Verney, Romaric; Ferré, Bénédicte
2018-05-01
We describe and demonstrate algorithms for treating cohesive and mixed sediment that have been added to the Regional Ocean Modeling System (ROMS version 3.6), as implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST Subversion repository revision 1234). These include the following: floc dynamics (aggregation and disaggregation in the water column); changes in floc characteristics in the seabed; erosion and deposition of cohesive and mixed (combination of cohesive and non-cohesive) sediment; and biodiffusive mixing of bed sediment. These routines supplement existing non-cohesive sediment modules, thereby increasing our ability to model fine-grained and mixed-sediment environments. Additionally, we describe changes to the sediment bed layering scheme that improve the fidelity of the modeled stratigraphic record. Finally, we provide examples of these modules implemented in idealized test cases and a realistic application.
Kitahata, Mari M; Drozd, Daniel R; Crane, Heidi M; Van Rompaey, Stephen E; Althoff, Keri N; Gange, Stephen J; Klein, Marina B; Lucas, Gregory M; Abraham, Alison G; Lo Re, Vincent; McReynolds, Justin; Lober, William B; Mendes, Adell; Modur, Sharada P; Jing, Yuezhou; Morton, Elizabeth J; Griffith, Margaret A; Freeman, Aimee M; Moore, Richard D
2015-01-01
The burden of HIV disease has shifted from traditional AIDS-defining illnesses to serious non-AIDS-defining comorbid conditions. Research aimed at improving HIV-related comorbid disease outcomes requires well-defined, verified clinical endpoints. We developed methods to ascertain and verify end-stage renal disease (ESRD) and end-stage liver disease (ESLD) and validated screening algorithms within the largest HIV cohort collaboration in North America (NA-ACCORD). Individuals who screened positive among all participants in twelve cohorts enrolled between January 1996 and December 2009 underwent medical record review to verify incident ESRD or ESLD using standardized protocols. We randomly sampled 6% of contributing cohorts to determine the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of ESLD and ESRD screening algorithms in a validation subcohort. Among 43,433 patients screened for ESRD, 822 screened positive of which 620 met clinical criteria for ESRD. The algorithm had 100% sensitivity, 99% specificity, 82% PPV, and 100% NPV for ESRD. Among 41,463 patients screened for ESLD, 2,024 screened positive of which 645 met diagnostic criteria for ESLD. The algorithm had 100% sensitivity, 95% specificity, 27% PPV, and 100% NPV for ESLD. Our methods proved robust for ascertainment of ESRD and ESLD in persons infected with HIV.
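A small sketch of the validation arithmetic: sensitivity, specificity, PPV and NPV computed from screening results cross-tabulated against medical record review. The 2x2 counts below are placeholders, not the NA-ACCORD numbers.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Placeholder counts: screen result vs verified end-stage disease status.
print(screening_metrics(tp=60, fp=13, fn=0, tn=2500))
```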
NASA Astrophysics Data System (ADS)
Franch, B.; Skakun, S.; Vermote, E.; Roger, J. C.
2017-12-01
Surface albedo is an essential parameter not only for developing climate models, but also for most energy balance studies. While climate models are usually applied at coarse resolution, energy balance studies, which are mainly focused on agricultural applications, require a high spatial resolution. The albedo, estimated through the angular integration of the BRDF, requires an appropriate angular sampling of the surface. However, Sentinel-2A sampling characteristics, with nearly constant observation geometry and low illumination variation, prevent the derivation of a surface albedo product directly. In this work, we apply an algorithm developed to derive Landsat surface albedo to Sentinel-2A. It is based on the BRDF parameters estimated from the MODerate Resolution Imaging Spectroradiometer (MODIS) CMG surface reflectance product (M{O,Y}D09) using the VJB method (Vermote et al., 2009). Sentinel-2A unsupervised classification images are used to disaggregate the BRDF parameters to the Sentinel-2 spatial resolution. We test the results over five different sites of the US SURFRAD network and compare them with albedo field measurements. Additionally, we also test this methodology using Landsat-8 images.
Are human embryos Kantian persons?: Kantian considerations in favor of embryonic stem cell research.
Manninen, Bertha Alvarez
2008-01-31
One argument used by detractors of human embryonic stem cell research (hESCR) invokes Kant's formula of humanity, which proscribes treating persons solely as a means to an end, rather than as ends in themselves. According to Fuat S. Oduncu, for example, adhering to this imperative entails that human embryos should not be disaggregated to obtain pluripotent stem cells for hESCR. Given that human embryos are Kantian persons from the time of their conception, killing them to obtain their cells for research fails to treat them as ends in themselves. This argument assumes two points that are rather contentious given a Kantian framework. First, the argument assumes that when Kant maintains that humanity must be treated as an end in itself, he means to argue that all members of the species Homo sapiens must be treated as ends in themselves; that is, that Kant regards personhood as co-extensive with belonging to the species Homo sapiens. Second, the argument assumes that the event of conception is causally responsible for the genesis of a Kantian person and that, therefore, an embryo is a Kantian person from the time of its conception. In this paper, I will present challenges against these two assumptions by engaging in an exegetical study of some of Kant's works. First, I will illustrate that Kant did not use the term "humanity" to denote a biological species, but rather the capacity to set ends according to reason. Second, I will illustrate that it is difficult given a Kantian framework to denote conception (indeed any biological event) as causally responsible for the creation of a person. Kant ascribed to a dualistic view of human agency, and personhood, according to him, was derived from the supersensible capacity for reason. To argue that a Kantian person is generated due to the event of conception ignores Kant's insistence in various aspects of his work that it is not possible to understand the generation of a person qua a physical operation. Finally, I will end the paper by drawing from Allen Wood's work in Kantian philosophy in order to generate an argument in favor of hESCR.
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements and allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by applying a simple zeroth-order correction, updating each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
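A hedged sketch of the zeroth-order correction idea: fine-grid model soil moisture is shifted so that its mean matches each coarse microwave footprint, which is the simplest way to make model output and footprint-scale measurements comparable. Grid size and values are illustrative, and the Kalman-filter update mentioned in the abstract is not reproduced here.

```python
import numpy as np

# Fine-grid model soil moisture (e.g. 4 x 4 cells inside one radiometer footprint).
model_fine = np.array([
    [0.18, 0.20, 0.22, 0.25],
    [0.17, 0.19, 0.23, 0.26],
    [0.16, 0.18, 0.24, 0.27],
    [0.15, 0.17, 0.25, 0.28],
])

footprint_obs = 0.24          # coarse microwave retrieval for the footprint

# Zeroth-order (additive bias) correction: preserve the modelled spatial
# pattern but force the footprint mean to match the measurement.
correction = footprint_obs - model_fine.mean()
disaggregated = model_fine + correction

assert np.isclose(disaggregated.mean(), footprint_obs)
print(disaggregated.round(3))
```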
NASA Astrophysics Data System (ADS)
Das, N. N.; Entekhabi, D.; Dunbar, R. S.; Colliander, A.; Kim, S.; Yueh, S. H.
2017-12-01
NASA's Soil Moisture Active Passive (SMAP) mission was launched on January 31st, 2015. SMAP utilizes an L-band radar and radiometer sharing a rotating 6-meter mesh reflector antenna. However, on July 7th, 2015, the SMAP radar encountered an anomaly and is currently inoperable. During the SMAP post-radar phase, many ways have been explored to recover the high-resolution soil moisture capability of the SMAP mission. One of the feasible approaches is to substitute the SMAP radar with other available SAR data. Sentinel 1A/1B SAR data is found to be more suitable for combining with the SMAP radiometer data because of a similar orbit configuration that allows their swaths to overlap with a minimal time difference, which is key to the SMAP active-passive algorithm. The Sentinel SDV mode acquisition also provides the co-pol and x-pol observations required by the SMAP active-passive algorithm. Some differences do exist between the SMAP SAR data and the Sentinel SAR data, mainly: 1) Sentinel carries a C-band SAR whereas SMAP is L-band; 2) Sentinel has multiple incidence angles within its swath, whereas SMAP has a single incidence angle; and 3) the Sentinel swath width is 300 km as compared to the SMAP swath width of 1000 km. On any given day, the narrow swath of the Sentinel observations significantly reduces the spatial coverage of the SMAP active-passive approach as compared to the SMAP swath coverage. The temporal resolution (revisit interval) is also degraded from 3 days to 12 days when Sentinel 1A/1B data is used. One bright side of using Sentinel 1A/1B data in the SMAP active-passive algorithm is the potential of obtaining the disaggregated brightness temperature and soil moisture at much finer spatial resolutions of 3 km and 9 km with optimal accuracy. The Beta version of the SMAP-Sentinel Active-Passive high-resolution product will be made available to the public in September 2017.
Malyugin, Boris E; Shpak, Alexander A; Pokrovskiy, Dmitry F
2015-08-01
To use anterior segment optical coherence tomography (AS-OCT) to evaluate the clinical effectiveness of Implantable Collamer Lens posterior chamber phakic intraocular lens (PC pIOL) sizing based on measurement of the distance from the iris pigment end to the iris pigment end. S. Fyodorov Eye Microsurgery Federal State Institution, Moscow, Russia. Evaluation of diagnostic test or technology. Stage 1 was a prospective study. The sulcus-to-sulcus (STS) distance was measured using ultrasound biomicroscopy (UBM) (Vumax 2), and the distance from iris pigment end to iris pigment end was assessed using a proposed AS-OCT algorithm. Part 2 used retrospective data from patients after implantation of a PC pIOL with the size selected according to AS-OCT (Visante) measurements of the distance from iris pigment end to iris pigment end. The PC pIOL vault was measured by AS-OCT, and adverse events were assessed. Stage 1 comprised 32 eyes of 32 myopic patients (mean age 28.4 years ± 6.3 [SD]; mean spherical equivalent [SE] -13.11 ± 4.28 diopters [D]). Stage 2 comprised 29 eyes of 16 patients (mean age 27.7 ± 4.7 years; mean SE -16.55 ± 3.65 D). The mean STS distance (12.35 ± 0.47 mm) was similar to the mean distance from iris pigment end to iris pigment end distance (examiner 1: 12.36 ± 0.51 mm; examiner 2: 12.37 ± 0.53 mm). The PC pIOL sized using the new AS-OCT algorithm had a mean vault of 0.53 ± 0.18 mm and did not produce adverse events during the 12-month follow-up. In 16 of 29 eyes, the PC pIOL vault was within an optimum interval (0.35 to 0.70 mm). The new measurement algorithm can be effectively used for PC pIOL sizing. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Overlapping and Specific Functions of the Hsp104 N Domain Define Its Role in Protein Disaggregation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jungsoon; Sung, Nuri; Mercado, Jonathan M.
Hsp104 is a ring-forming protein disaggregase that rescues stress-damaged proteins from an aggregated state. To facilitate protein disaggregation, Hsp104 cooperates with Hsp70 and Hsp40 chaperones (Hsp70/40) to form a bi-chaperone system. How Hsp104 recognizes its substrates, particularly the importance of the N domain, remains poorly understood and multiple, seemingly conflicting mechanisms have been proposed. Although the N domain is dispensable for protein disaggregation, it is sensitive to point mutations that abolish the function of the bacterial Hsp104 homolog in vitro, and is essential for curing yeast prions by Hsp104 overexpression in vivo. Here, we present the crystal structure of an N-terminal fragment of Saccharomyces cerevisiae Hsp104 with the N domain of one molecule bound to the C-terminal helix of the neighboring D1 domain. Consistent with mimicking substrate interaction, mutating the putative substrate-binding site in a constitutively active Hsp104 variant impairs the recovery of functional protein from aggregates. We find that the observed substrate-binding defect can be rescued by Hsp70/40 chaperones, providing a molecular explanation as to why the N domain is dispensable for protein disaggregation when Hsp70/40 is present, yet essential for the dissolution of Hsp104-specific substrates, such as yeast prions, which likely depends on a direct N domain interaction.
Express bus-fringe parking planning methodology.
DOT National Transportation Integrated Search
1975-01-01
The conception, calibration, and evaluation of alternative disaggregate behavioral models of the express bus-fringe parking travel choice situation are described. Survey data collected for the Parham Express Service in Richmond, Virginia, are used to...
NASA Astrophysics Data System (ADS)
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Before now, no detailed assessment of the uncertainties in the estimates of emissions had been carried out. We used Monte Carlo simulation to perform such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is), correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
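As a rough illustration of the Monte Carlo propagation described above, the following sketch draws emission factors from assumed distributions and pushes them through a simple Tier-1-style N2O calculation; the factor names, distributions, and activity values are hypothetical and are not taken from the UK inventory model.

```python
# Illustrative Monte Carlo propagation of emission-factor uncertainty through a
# simple Tier-1-style N2O calculation. All values and distributions are
# hypothetical placeholders, not the UK inventory inputs.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

n_applied = 1000.0   # hypothetical N applied to soils (kt N / yr)
n_leached = 250.0    # hypothetical N lost by leaching/runoff (kt N / yr)

# Hypothetical lognormal uncertainty around IPCC-style default emission factors
ef1 = rng.lognormal(mean=np.log(0.01),   sigma=0.3, size=n_sim)  # direct emissions
ef5 = rng.lognormal(mean=np.log(0.0075), sigma=0.5, size=n_sim)  # leaching/runoff

emissions = (n_applied * ef1 + n_leached * ef5) * 44.0 / 28.0  # N2O-N -> N2O

lo, hi = np.percentile(emissions, [2.5, 97.5])
print(f"N2O: {emissions.mean():.1f} kt/yr (95% interval {lo:.1f}-{hi:.1f})")

# Crude sensitivity check: tightening the spread of EF1 and re-running shows how
# much of the output uncertainty that single factor contributes.
ef1_tight = rng.lognormal(mean=np.log(0.01), sigma=0.15, size=n_sim)
emissions_tight = (n_applied * ef1_tight + n_leached * ef5) * 44.0 / 28.0
print("std before/after tightening EF1:", emissions.std(), emissions_tight.std())
```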
Bilateral Trade Flows and Income Distribution Similarity.
Martínez-Zarzoso, Inmaculada; Vollmer, Sebastian
2016-01-01
Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980-2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories.
Hoshiar, Ali Kafash; Le, Tuan-Anh; Amin, Faiz Ul; Kim, Myeong Ok; Yoon, Jungwon
2017-12-22
The blood-brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogramed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme.
P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.
Young-Rae Cho; Yanan Xin; Speegle, Greg
2015-01-01
Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as a preprocessing step for all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs, which indicate the link patterns that appear more frequently in a PPI network than in a random graph. The last algorithm uses the information propagation technique, which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.
Rømer Thomsen, Kristine; Callesen, Mette Buhl; Feldstein Ewing, Sarah W
2017-09-01
Cannabis use represents a major public health issue throughout the globe. Yet, we still lack the most fundamental knowledge on long-term effects of cannabis on neural, cognitive, and behavioral function. Part of this stems from how cannabis has been measured historically. To this end, most empirical examinations of cannabis have consolidated all types of cannabis collectively. However, this approach obscures differences in how cannabinoids operate. In this commentary, we address the contrasting properties of tetrahydrocannabinol (THC) and cannabidiol (CBD) and their opposing effects on cognitive function. In addition, we address the increase in cannabis potency throughout the past two decades and how that impacts generalizability of early data to evaluations of contemporary public health. We underscore the urgent need for future research to disaggregate examination of THC from CBD, along with the importance of measuring cannabis potency to more effectively unravel its influence on cognitive function and other health issues. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
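For readers unfamiliar with where the momentum handling enters a hybrid MD-MC step, the following is a plain HMC-style skeleton on a 1D double-well potential (unit mass, kT = 1). It only shows the structure — fresh momentum at the start of the trajectory, momentum reversal at the end, Metropolis test on the total energy change — and is not the non-equilibrium (neMD) driving or the symmetric two-ends prescription analyzed in the paper.

```python
# Minimal hybrid MD-MC (HMC-style) sketch on a 1D double-well, assuming unit
# mass and kT = 1; a structural illustration only, not the paper's neMD-MC scheme.
import numpy as np

rng = np.random.default_rng(0)
U  = lambda x: (x**2 - 1.0)**2          # double-well potential energy
dU = lambda x: 4.0 * x * (x**2 - 1.0)   # its gradient

def hybrid_step(x, dt=0.05, n_md=20):
    p = rng.normal()                      # fresh momentum at the trajectory start
    x_new, p_new = x, p
    p_new -= 0.5 * dt * dU(x_new)         # leapfrog integration of the trajectory
    for i in range(n_md):
        x_new += dt * p_new
        if i < n_md - 1:
            p_new -= dt * dU(x_new)
    p_new -= 0.5 * dt * dU(x_new)
    p_new = -p_new                        # momentum reversal at the trajectory end
    dH = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
    return x_new if rng.random() < np.exp(-dH) else x  # Metropolis accept/reject

x, samples = 1.0, []
for _ in range(5000):
    x = hybrid_step(x)
    samples.append(x)
print("fraction of samples in the left well:", np.mean(np.array(samples) < 0))
```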
NASA Astrophysics Data System (ADS)
Mallick, Rajnish; Ganguli, Ranjan; Kumar, Ravi
2017-05-01
The optimized design of a smart post-buckled beam actuator (PBA) is performed in this study. A smart-material-based piezoceramic stack actuator is used as a prime mover to drive the buckled beam actuator. Piezoceramic actuators are high-force, small-displacement devices; they possess high energy density and have high bandwidth. In this study, bench-top experiments are conducted to investigate the angular tip deflections due to the PBA. A new design of a linear-to-linear motion amplification device (LX-4) is developed to circumvent the small-displacement handicap of piezoceramic stack actuators. LX-4 enhances the piezoceramic actuator mechanical leverage by a factor of four. The PBA model is based on dynamic elastic stability and is analyzed using the Mathieu-Hill equation. A formal optimization is carried out using a newly developed meta-heuristic, nature-inspired algorithm named the bat algorithm (BA). The BA utilizes the echolocation capability of bats. An optimized PBA in conjunction with LX-4 generates end rotations of the order of 15° at the output end. The optimized PBA design has lower weight and induces large end rotations, which will be useful in the development of various mechanical and aerospace devices, such as helicopter trailing-edge flaps, micro and nano aerial vehicles, and other robotic systems.
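A minimal sketch of the bat algorithm applied to a generic continuous minimization problem is given below. The objective, bounds, and parameter settings are placeholders; they are not the post-buckled beam model or the settings used in the study.

```python
# Minimal bat algorithm sketch for continuous minimization; all settings are
# illustrative placeholders, not the PBA optimization of the paper.
import numpy as np

def bat_algorithm(obj, bounds, n_bats=20, n_iter=200, f_min=0.0, f_max=2.0,
                  alpha=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))                   # bat velocities
    A = np.ones(n_bats)                           # loudness
    r = np.zeros(n_bats)                          # pulse emission rate
    fit = np.array([obj(xi) for xi in x])
    best = x[fit.argmin()].copy()

    for t in range(n_iter):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()     # frequency tuning
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > r[i]:                        # local walk around the best bat
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim), lo, hi)
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:   # accept; update loudness/rate
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha
                r[i] = 1.0 - np.exp(-gamma * t)
            if f_cand <= obj(best):
                best = cand.copy()
    return best, obj(best)

# Example: minimize a simple quadratic bowl in two dimensions
best, val = bat_algorithm(lambda z: np.sum((z - 0.3) ** 2),
                          (np.full(2, -5.0), np.full(2, 5.0)))
print(best, val)
```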
Phillips, Susan P.; Hammarström, Anne
2011-01-01
Introduction Limited existing research on gender inequities suggests that for men workplace atmosphere shapes wellbeing while women are less susceptible to socioeconomic or work status but vulnerable to home inequities. Methods Using the 2007 Northern Swedish Cohort (n = 773) we identified relative contributions of perceived gender inequities in relationships, financial strain, and education to self-reported health to determine whether controlling for sex, examining interactions between sex and other social variables, or sex-disaggregating data yielded most information about sex differences. Results and Discussion Men had lower education but also less financial strain, and experienced less gender inequity. Overall, low education and financial strain detracted from health. However, sex-disaggregated data showed this to be true for women, whereas for men only gender inequity at home affected health. In the relatively egalitarian Swedish environment where women more readily enter all work arenas and men often provide parenting, traditional primacy of the home environment (for women) and the work environment (for men) in shaping health is reversing such that perceived domestic gender inequity has a significant health impact on men, while for women only education and financial strain are contributory. These outcomes were identified only when data were sex-disaggregated. PMID:21747922
Torok, Michelle; Darke, Shane; Shand, Fiona; Kaye, Sharlene
2016-09-01
Violence is a major burden of harm among injecting drug users (IDU); however, the liability to violent offending is not well understood. The current study aimed to better understand differences in the liability to violence by determining whether IDU could be disaggregated into distinct violent offending classes, and determining the correlates of class membership. A total of 300 IDU from Sydney, Australia were administered a structured interview examining the prevalence and severity of drug use and violent offending histories, as well as early life risk factors (maltreatment, childhood mental disorder, trait personality). IDU were disaggregated into four distinct latent classes, comprising a non-violent class (24%), an adolescent-onset persistent class (33%), an adult-onset transient class (24%) and an early-onset, chronic class (19%). Pairwise and group comparisons of classes on predispositional and substance use risks showed that the EARLY class had the poorest psychosocial risk profile, while the NON class had the most favourable. Multinomial logistic regression revealed that higher trait impulsivity and aggression scores, having a history of conduct disorder, frequent childhood abuse, and more problematic alcohol use, were independently associated with more temporally stable and severe violent offending. The model explained 67% of variance in class membership (χ(2)=207.7, df=51, p<0.001). IDU can be meaningfully disaggregated into distinct violent offending classes using developmental criteria. The age of onset of violence was indicative of class membership insofar as the extent of early life risk exposure was differentially associated with greater long-term liability to violence and drug use. Copyright © 2016 Elsevier Ltd. All rights reserved.
An Emergency Packet Forwarding Scheme for V2V Communication Networks
2014-01-01
This paper proposes an effective warning message forwarding scheme for cooperative collision avoidance. In an emergency situation, an emergency-detecting vehicle warns the neighbor vehicles via an emergency warning message (EWM). Since the transmission range is limited, the warning message is broadcast in a multihop manner. Broadcast packets introduce two challenges for forwarding the warning message in the vehicular network: redundancy of warning messages and competition with nonemergency transmissions. In this paper, we study and address the two major challenges to achieve low latency in delivery of the warning message. To reduce the intervehicle latency and end-to-end latency, which cause chain collisions, we propose a two-way intelligent broadcasting method with an adaptable distance-dependent backoff algorithm. Considering locations of vehicles, the proposed algorithm controls the broadcast of a warning message to reduce redundant EWMs and adaptively chooses the contention window to compete with nonemergency transmission. Via simulations, we show that our proposed algorithm reduces the probability of rear-end crashes by 70% compared to previous algorithms by reducing the intervehicle delay. We also show that the end-to-end propagation delay of the warning message is reduced by 55%. PMID:25054181
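The core of a distance-dependent backoff is that vehicles farther from the sender draw from a smaller contention window, so the most distant receiver tends to rebroadcast first and nearer relays can suppress their redundant copies. The slot counts and the linear mapping below are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch of a distance-dependent backoff window; values are illustrative.
import random

def backoff_slots(distance_m, tx_range_m=300.0, cw_min=4, cw_max=32):
    """Pick a contention-window size that shrinks with distance from the sender."""
    frac = min(max(distance_m / tx_range_m, 0.0), 1.0)
    cw = cw_max - frac * (cw_max - cw_min)      # far vehicles get a small window
    return random.randint(0, int(cw))

for d in (50, 150, 290):
    print(d, "m ->", backoff_slots(d), "slots")
```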
[Medical computer-aided detection method based on deep learning].
Tao, Pan; Fu, Zhongliang; Zhu, Kai; Wang, Lili
2018-03-01
This paper performs a comprehensive study on computer-aided detection for medical diagnosis with deep learning. Based on the region convolutional neural network and prior knowledge of the target, the algorithm uses a region proposal network and a region-of-interest pooling strategy, introduces a multi-task loss function (classification loss, bounding box localization loss, and object rotation loss), and optimizes it end-to-end. For medical images, it locates the target automatically and provides the localization result for the subsequent segmentation task. For the detection of the left ventricle in echocardiography, additional proposed landmarks such as the mitral annulus, endocardial pad, and apical position were used to estimate the left ventricular posture effectively. In order to verify the robustness and effectiveness of the algorithm, ultrasound and magnetic resonance image data were selected for the experiments. Experimental results show that the algorithm is fast, accurate, and effective.
Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON
NASA Astrophysics Data System (ADS)
Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana
2018-05-01
In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with adaptive sleep cycle is presented. This DBA algorithm has the ability to allocate an additional bandwidth to the end user within a single sleep cycle whose duration changes depending on the current buffers occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle depending on the network load in order to provide service to the end user without violating strict QoS requests in all network operating conditions.
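One way to picture an adaptive sleep cycle is as a mapping from current buffer occupancy to the next cycle length: lightly loaded end users sleep longer, heavily loaded ones wake more often. The thresholds and cycle bounds in the sketch below are assumptions for illustration, not the algorithm in the paper.

```python
# Illustrative mapping from buffer occupancy to sleep-cycle duration; the bounds
# and the linear form are assumptions, not the paper's DBA algorithm.
def sleep_cycle_ms(buffer_occupancy, cycle_min_ms=2.0, cycle_max_ms=50.0):
    """buffer_occupancy in [0, 1]; returns the next sleep-cycle duration in ms."""
    occupancy = min(max(buffer_occupancy, 0.0), 1.0)
    # Empty buffer -> longest sleep; full buffer -> shortest sleep
    return cycle_max_ms - occupancy * (cycle_max_ms - cycle_min_ms)

for occ in (0.0, 0.3, 0.9):
    print(f"occupancy {occ:.1f} -> sleep cycle {sleep_cycle_ms(occ):.1f} ms")
```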
Grimm, W; Menz, V; Hoffmann, J; Maisch, B
1998-04-01
Unnecessary shocks by ICDs for rhythms other than sustained VT or VF have been described as the most frequent adverse event in ICD patients. To avoid unnecessary shocks for self-terminating arrhythmias, the third-generation Jewel PCD defibrillators 7202, 7219, and 7220 Plus use a specially designed VF confirmation algorithm after charge end. The purpose of this study was to determine the ability of this VF confirmation algorithm to recognize nonsustained VT, and to analyze the reasons for failure of the PCD device to abort shock therapy for nonsustained VT despite use of this VF confirmation algorithm. Analysis of stored electrograms of electrical events triggering high voltage capacitor charging in the programmed VF zone of the device showed 36 spontaneous episodes of nonsustained VT (227 +/- 21 beats/min) during 18 +/- 7 months follow-up in 15 patients who had a Jewel PCD implanted at our hospital. Intracardiac electrogram recordings and simultaneously retrieved marker channels demonstrated that the ICD shock was appropriately aborted according to the VF confirmation algorithm in 24 (67%) of 36 episodes of nonsustained VT. Twelve episodes (33%) of nonsustained VT, however, were followed by spontaneous ICD shock in 6 (40%) of the 15 study patients. The only reason for all 12 shocks for nonsustained VT was the inability of the device to recognize the absence of VT after charge end due to shortcomings of the VF confirmation algorithm: 11 of the 12 shocks for nonsustained VT were triggered by the occurrence of paced beats during the VF confirmation period and 1 shock for nonsustained VT was triggered by the occurrence of 2 premature beats after charge end. Thus, better VF confirmation algorithms need to be incorporated in future PCD devices to avoid unnecessary shocks for nonsustained VT.
NASA Astrophysics Data System (ADS)
Wang, W.; Lee, C.; Cochran, K. K.; Armstrong, R. A.
2016-02-01
Sinking particles play a pivotal role in transferring material from the surface to the deeper ocean via the "biological pump". To quantify the extent to which these particles aggregate and disaggregate, and thus affect particle settling velocity, we constructed a box model to describe organic matter cycling. The box model was fit to chloropigment data sampled in the 2005 MedFlux project using Indented Rotating Sphere sediment traps operating in Settling Velocity (SV) mode. Because of the very different pigment compositions of phytoplankton and fecal pellets, chloropigments are useful as proxies to record particle exchange. The maximum likelihood statistical method was used to estimate particle aggregation, disaggregation, and organic matter remineralization rate constants. Eleven settling velocity categories collected by SV sediment traps were grouped into two sinking velocity classes (fast- and slow-sinking) to decrease the number of parameters that needed to be estimated. Organic matter degradation rate constants were estimated to be 1.2, 1.6, and 1.1 y^-1, which are equivalent to degradation half-lives of 0.60, 0.45, and 0.62 y, at 313, 524, and 1918 m, respectively. Rate constants of chlorophyll a degradation to pheopigments (pheophorbide, pheophytin, and pyropheophorbide) were estimated to be 0.88, 0.93, and 1.2 y^-1, at 313, 524, and 1918 m, respectively. Aggregation rate constants varied little with depth, with the highest value being 0.07 y^-1 at 524 m. Disaggregation rate constants were highest at 524 m (14 y^-1) and lowest at 1918 m (9.6 y^-1).
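The quoted half-lives follow from the first-order rate constants through the standard relation; as a check for the 313 m value:

```latex
t_{1/2} = \frac{\ln 2}{k}, \qquad
t_{1/2}(313\ \mathrm{m}) = \frac{0.693}{1.2\ \mathrm{y}^{-1}} \approx 0.58\ \mathrm{y}
```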
DOT National Transportation Integrated Search
2014-05-01
The U.S. Environmental Protection Agency's (EPA) newest emissions model, MOtor Vehicle Emission Simulator (MOVES), uses a disaggregate approach that enables the users of the model to create and use local drive schedules (drive cycles) in order ...
Simulation of APEX data: the SENSOR approach
NASA Astrophysics Data System (ADS)
Boerner, Anko; Schaepman, Michael E.; Schlaepfer, Daniel; Wiest, Lorenz; Reulke, Ralf
1999-10-01
The consistent simulation of airborne and spaceborne hyperspectral data is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observing conditions, the choice and test of algorithms for data processing, error estimations and the evaluation of the capabilities of the whole sensor system. The integration of three approaches is suggested for the data simulation of APEX (Airborne Prism Experiment): (1) a spectrally consistent approach (e.g. using AVIRIS data), (2) a geometrically consistent approach (e.g. using CASI data), and (3) an end-to-end simulation of the sensor system. In this paper, the last approach is discussed in detail. Such a technique should be used if there is no simple deterministic relation between input and output parameters. The simulation environment SENSOR (Software Environment for the Simulation of Optical Remote Sensing Systems) presented here includes a full model of the sensor system, the observed object and the atmosphere. The simulator consists of three parts. The first part describes the geometrical relations between object, sun, and sensor using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor-radiance using a pre-calculated multidimensional lookup-table for the atmospheric boundary conditions and bi-directional reflectances. Part three consists of an optical and an electronic sensor model for the generation of digital images. Application-specific algorithms for data processing must be considered additionally. The benefit of using an end-to-end simulation approach is demonstrated, an example of a simulated APEX data cube is given, and preliminary steps of evaluation of SENSOR are carried out.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
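A generic way to see how a Lagrangian relaxation folds a delay constraint into routing is sketched below: a multiplier turns the per-link delay into part of the link cost so an ordinary shortest-path search can be reused, and the multiplier is adjusted until the delay bound is approximately met. The toy graph and the simple subgradient update are illustrative assumptions; this is not the NODQC metric from the paper.

```python
# Generic Lagrangian-relaxed delay-constrained routing sketch; illustrative only.
import heapq

def dijkstra(graph, src, dst, weight):
    """graph: {u: [(v, cost, delay), ...]}; weight(cost, delay) -> edge weight."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c, dl in graph[u]:
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def delay_constrained_route(graph, src, dst, delay_bound, iters=30, step=0.5):
    lam = 0.0
    for k in range(iters):
        path = dijkstra(graph, src, dst, lambda c, dl: c + lam * dl)
        delay = sum(dl for u, v in zip(path, path[1:])
                    for w, c, dl in graph[u] if w == v)
        # Diminishing-step subgradient update on the delay constraint violation
        lam = max(0.0, lam + step / (k + 1) * (delay - delay_bound))
    return path

graph = {  # node: [(neighbor, cost, delay_ms)]
    "s": [("a", 1, 30), ("b", 3, 5)],
    "a": [("t", 1, 30)],
    "b": [("t", 3, 5)],
    "t": [],
}
print(delay_constrained_route(graph, "s", "t", delay_bound=20))
```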
Shepherd, Anita; Yan, Xiaoyuan; Nayak, Dali; Newbold, Jamie; Moran, Dominic; Dhanoa, Mewa Singh; Goulding, Keith; Smith, Pete; Cardenas, Laura M.
2015-01-01
China accounts for a third of global nitrogen fertilizer consumption. Under an International Panel on Climate Change (IPCC) Tier 2 assessment, emission factors (EFs) are developed for the major crop types using country-specific data. IPCC advises a separate calculation for the direct nitrous oxide (N2O) emissions of rice cultivation from that of cropland and the consideration of the water regime used for irrigation. In this paper we combine these requirements in two independent analyses, using different data quality acceptance thresholds, to determine the influential parameters on emissions with which to disaggregate and create N2O EFs. Across China, the N2O EF for lowland horticulture was slightly higher (between 0.74% and 1.26% of fertilizer applied) than that for upland crops (values ranging between 0.40% and 1.54%), and significantly higher than for rice (values ranging between 0.29% and 0.66% on temporarily drained soils, and between 0.15% and 0.37% on un-drained soils). Higher EFs for rice were associated with longer periods of drained soil and the use of compound fertilizer; lower emissions were associated with the use of urea or acid soils. Higher EFs for upland crops were associated with clay soil, compound fertilizer or maize crops; lower EFs were associated with sandy soil and the use of urea. Variation in emissions for lowland vegetable crops was closely associated with crop type. The two independent analyses in this study produced consistent disaggregated N2O EFs for rice and mixed crops, showing that the use of influential cropping parameters can produce robust EFs for China. PMID:26865831
NASA Astrophysics Data System (ADS)
Shepherd, Anita; Yan, Xiaoyuan; Nayak, Dali; Newbold, Jamie; Moran, Dominic; Dhanoa, Mewa Singh; Goulding, Keith; Smith, Pete; Cardenas, Laura M.
2015-12-01
China accounts for a third of global nitrogen fertilizer consumption. Under an International Panel on Climate Change (IPCC) Tier 2 assessment, emission factors (EFs) are developed for the major crop types using country-specific data. IPCC advises a separate calculation for the direct nitrous oxide (N2O) emissions of rice cultivation from that of cropland and the consideration of the water regime used for irrigation. In this paper we combine these requirements in two independent analyses, using different data quality acceptance thresholds, to determine the influential parameters on emissions with which to disaggregate and create N2O EFs. Across China, the N2O EF for lowland horticulture was slightly higher (between 0.74% and 1.26% of fertilizer applied) than that for upland crops (values ranging between 0.40% and 1.54%), and significantly higher than for rice (values ranging between 0.29% and 0.66% on temporarily drained soils, and between 0.15% and 0.37% on un-drained soils). Higher EFs for rice were associated with longer periods of drained soil and the use of compound fertilizer; lower emissions were associated with the use of urea or acid soils. Higher EFs for upland crops were associated with clay soil, compound fertilizer or maize crops; lower EFs were associated with sandy soil and the use of urea. Variation in emissions for lowland vegetable crops was closely associated with crop type. The two independent analyses in this study produced consistent disaggregated N2O EFs for rice and mixed crops, showing that the use of influential cropping parameters can produce robust EFs for China.
Scaled Runge-Kutta algorithms for handling dense output
NASA Technical Reports Server (NTRS)
Horn, M. K.
1981-01-01
Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
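As a generic stand-in for the dense-output idea (not the scaled Runge-Kutta formulas developed in the report), the sketch below takes a classical RK4 step and then evaluates the solution anywhere inside the step with a cubic Hermite interpolant built from the endpoint values and derivatives.

```python
# Generic dense-output sketch: RK4 step plus cubic Hermite interpolation inside
# the step. This is a common illustration, not the report's scaled RK scheme.
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def hermite_eval(f, t0, y0, t1, y1, t):
    """Cubic Hermite interpolation of y between (t0, y0) and (t1, y1)."""
    h = t1 - t0
    s = (t - t0) / h
    f0, f1 = f(t0, y0), f(t1, y1)
    return ((1 + 2 * s) * (1 - s) ** 2 * y0 + s * (1 - s) ** 2 * h * f0
            + s ** 2 * (3 - 2 * s) * y1 + s ** 2 * (s - 1) * h * f1)

# Example: y' = -y, y(0) = 1; interpolate at the middle of one step of size 0.5
f = lambda t, y: -y
y1 = rk4_step(f, 0.0, 1.0, 0.5)
print(hermite_eval(f, 0.0, 1.0, 0.5, y1, 0.25), math.exp(-0.25))
```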
Terzenidis, Nikos; Moralis-Pegios, Miltiadis; Mourgias-Alexandris, George; Vyrsokinos, Konstantinos; Pleros, Nikos
2018-04-02
Departing from traditional server-centric data center architectures towards disaggregated systems that can offer increased resource utilization at reduced cost and energy envelopes, the use of high-port switching with highly stringent latency and bandwidth requirements becomes a necessity. We present an optical switch architecture exploiting a hybrid broadcast-and-select/wavelength routing scheme with small-scale optical feedforward buffering. The architecture is experimentally demonstrated at 10 Gb/s, reporting error-free performance with a power penalty of <2.5 dB. Moreover, network simulations for a 256-node system revealed low latency values of only 605 nsec, at throughput values reaching 80% when employing 2-packet-size optical buffers, while multi-rack network performance was also investigated.
Banakh, V A; Marakasov, D A
2007-08-01
Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.
Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco
2013-08-15
In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui, C; Suh, Y; Robertson, D
Purpose: To develop a novel algorithm to generate internal respiratory signals for sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm extracted multiple time resolved features as potential respiratory signals. These features were taken from the 4D CT images and its Fourier transformed space. Several low-frequency locations in the Fourier space and selected anatomical features from the images were used as potential respiratory signals. A clustering algorithm was then used to search for the group of appropriate potential respiratory signals. The chosen signals were then normalized and averaged to form the final internal respiratory signal. Performance of the algorithm was tested in 50 4D CT data sets and results were compared with external signals from the real-time position management (RPM) system. Results: In almost all cases, the proposed algorithm generated internal respiratory signals that visibly matched the external respiratory signals from the RPM system. On average, the end inspiration times calculated by the proposed algorithm were within 0.1 s of those given by the RPM system. Less than 3% of the calculated end inspiration times were more than one time frame away from those given by the RPM system. In 3 out of the 50 cases, the proposed algorithm generated internal respiratory signals that were significantly smoother than the RPM signals. In these cases, images sorted using the internal respiratory signals showed fewer artifacts in locations corresponding to the discrepancy in the internal and external respiratory signals. Conclusion: We developed a robust algorithm that generates internal respiratory signals from 4D CT images. In some cases, it even showed the potential to outperform the RPM system. The proposed algorithm is completely automatic and generally takes less than 2 min to process. It can be easily implemented into the clinic and can potentially replace the use of external surrogates.
Roberts, Greg; Bryant, Diane
2012-01-01
This study used data from the Early Childhood Longitudinal Survey, Kindergarten Class of 1998 –1999, to (a) estimate mathematics achievement trends through 5th grade in the population of students who are English-language proficient by the end of kindergarten, (b) compare trends across primary language groups within this English-language proficient group, (c) evaluate the effect of low socioeconomic status (SES) for English-language proficient students and within different primary language groups, and (d) estimate language-group trends in specific mathematics skill areas. The group of English-language proficient English-language learners (ELLs) was disaggregated into native Spanish speakers and native speakers of Asian languages, the 2 most prevalent groups of ELLs in the United States. Results of multilevel latent variable growth modeling suggest that primary language may be less salient than SES in explaining the mathematics achievement of English-language proficient ELLs. The study also found that mathematics-related school readiness is a key factor in explaining subsequent achievement differences and that the readiness gap is prevalent across the range of mathematics-related skills. PMID:21574702
The Suite for Embedded Applications and Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-10
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions, and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user blackbox evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future-proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Komatsu, Toshiya; Aida, Yoshitomi; Fukuda, Takao; Sanui, Terukazu; Hiratsuka, Shunji; Pabst, Michael J; Nishimura, Fusanori
2016-04-01
We studied the interaction of LPS with albumin, hemoglobin or high-density lipoprotein (HDL), and whether the interaction affected the activity of LPS on neutrophils. These proteins disaggregated LPS, depending upon temperature and LPS:protein ratio. Albumin-treated LPS was absorbed by immobilized anti-albumin antibody and was eluted with Triton X-100, indicating that LPS formed a hydrophobic complex with albumin. Rd mutant LPS was not disaggregated by the proteins, and did not form a complex with the proteins. But triethylamine-treated Rd mutant LPS formed complexes. When LPS was incubated with an equal concentration of albumin and with polymyxin B (PMXB), PMXB-LPS-protein three-way complexes were formed. After removal of PMXB, the complexes consisted of 11-15 LPS monomers bound to one albumin or hemoglobin molecule. LPS primed neutrophils for enhanced release of formyl peptide-stimulated superoxide, in a serum- and LPS-binding protein (LBP)-dependent manner. Although LPS plus LBP alone did not prime neutrophils, albumin-, hemoglobin- or HDL-treated LPS primed neutrophils when added with LBP. Triethylamine-treated Rd mutant LPS primed neutrophils only when incubated with one of the proteins and with LBP. Thus, in addition to LBP, disaggregation and complex formation of LPS with one of these proteins is required for LPS to prime neutrophils. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contains additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results and the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA. Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
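The component-wise behavioural filter at the heart of such a modified GLUE analysis can be pictured as follows: a parameter set is retained only if its simulated base flow, interflow, and surface runoff each match the disaggregated components above some efficiency threshold. The use of Nash-Sutcliffe efficiency and the threshold values below are assumptions for illustration, not the criteria of the study.

```python
# Minimal component-wise behavioural filter in the spirit of a modified GLUE
# analysis; the NSE criterion and thresholds are illustrative assumptions.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def is_behavioural(obs_components, sim_components, thresholds):
    """obs/sim_components: dicts keyed by 'baseflow', 'interflow', 'surface'."""
    return all(nse(obs_components[k], sim_components[k]) >= thresholds[k]
               for k in thresholds)

# Toy example with synthetic component series
rng = np.random.default_rng(1)
obs = {k: rng.gamma(2.0, 1.0, 100) for k in ("baseflow", "interflow", "surface")}
sim = {k: v + rng.normal(0, 0.2, 100) for k, v in obs.items()}
print(is_behavioural(obs, sim, {"baseflow": 0.6, "interflow": 0.5, "surface": 0.5}))
```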
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increment in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales in O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as their response in the presence of noise and with various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. Those comparisons made on a common simulator make it possible to highlight the pros and cons of the various methods, and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end, information-theory-based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods. There is not a common tool that can be used to evaluate the performance of the different algorithms, and to give people a guide for selecting the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing us to compare the different edge detection operators in a common environment.
The ALMA Science Pipeline: Current Status
NASA Astrophysics Data System (ADS)
Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy
2016-09-01
The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration Pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and for single-dish data end-to-end processing in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g. for higher-frequency and lower signal-to-noise datasets, and for new observing modes. A current focus includes the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline-flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithm used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to limited availability of observed data such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation events durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed a pronounced better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
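As a bare-bones illustration of the multiplicative microcanonical cascade idea behind such models, the sketch below splits each interval's rainfall depth into two halves, keeping all the rain in one half or dividing it by a random weight so that mass is conserved exactly at every level. The splitting probabilities and the Beta-distributed weight are placeholders; the "constrained cascade" and sigmoid-weight modifications introduced in the paper are not implemented here.

```python
# Bare-bones multiplicative microcanonical random cascade with branching number 2.
# Probabilities and the weight distribution are illustrative placeholders.
import numpy as np

def disaggregate(depths, levels, p_left=0.2, p_right=0.2, seed=0):
    """Split each value in `depths` into 2**levels sub-intervals, conserving mass."""
    rng = np.random.default_rng(seed)
    series = np.asarray(depths, dtype=float)
    for _ in range(levels):
        out = np.empty(series.size * 2)
        for i, d in enumerate(series):
            if d == 0.0:
                w = 0.0
            else:
                u = rng.random()
                if u < p_left:
                    w = 1.0                      # all rain in the first half
                elif u < p_left + p_right:
                    w = 0.0                      # all rain in the second half
                else:
                    w = rng.beta(2.0, 2.0)       # w / (1 - w) split, mass conserved
            out[2 * i], out[2 * i + 1] = d * w, d * (1.0 - w)
        series = out
    return series

daily = [12.4, 0.0, 3.1]              # e.g. daily totals in mm
fine = disaggregate(daily, levels=3)  # 8 sub-intervals per day
print(fine.reshape(3, -1).sum(axis=1))  # totals are preserved
```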
DISAGGREGATION OF GOES LAND SURFACE TEMPERATURES USING SURFACE EMISSIVITY
USDA-ARS?s Scientific Manuscript database
Accurate temporal and spatial estimation of land surface temperatures (LST) is important for modeling the hydrological cycle at field to global scales because LSTs can improve estimates of soil moisture and evapotranspiration. Using remote sensing satellites, accurate LSTs could be routine, but unfo...
Efficient audio signal processing for embedded systems
NASA Astrophysics Data System (ADS)
Chiu, Leung Kin
As mobile platforms continue to pack on more computational power, electronics manufacturers start to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that could operate for longer time, hence imposing design constraints. In this research, we investigate two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out from the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio contents that are below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. The machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
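The dynamic range compression used in both the bass-enhancement and the energy-reduction processing can be reduced to a simple static form: gain is cut above a threshold according to a ratio. The threshold, ratio, and the sample-by-sample form without attack/release smoothing below are simplifying assumptions, not the thesis's algorithm.

```python
# Simple static dynamic-range compressor sketch; parameters are illustrative.
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Apply static compression to a signal x with samples in [-1, 1]."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)      # attenuate only the excess above threshold
    return x * 10.0 ** (gain_db / 20.0)

t = np.linspace(0, 1, 8000)
signal = 0.9 * np.sin(2 * np.pi * 440 * t)
print(np.abs(signal).max(), np.abs(compress(signal)).max())
```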
Quantifying and Disaggregating Consumer Purchasing Behavior for Energy Systems Modeling
Consumer behaviors such as energy conservation, adoption of more efficient technologies, and fuel switching represent significant potential for greenhouse gas mitigation. Current efforts to model future energy outcomes have tended to use simplified economic assumptions ...
Digital Libraries: Situating Use in Changing Information Infrastructure.
ERIC Educational Resources Information Center
Bishop, Ann Peterson; Neumann, Laura J.; Star, Susan Leigh; Merkel, Cecelia; Ignacio, Emily; Sandusky, Robert J.
2000-01-01
Reviews empirical studies about how digital libraries evolve for use in scientific and technical work based on the Digital Libraries Initiative (DLI) at the University of Illinois. Discusses how users meet infrastructure and document disaggregation; describes use of the DLI testbed of full text journal articles; and explains research methodology.…
Local sensory control of a dexterous end effector
NASA Technical Reports Server (NTRS)
Pinto, Victor H.; Everett, Louis J.; Driels, Morris
1990-01-01
A numerical scheme was developed to solve the inverse kinematics for a user-defined manipulator. The scheme was based on a nonlinear least-squares technique which determines the joint variables by minimizing the difference between the target end effector pose and the actual end effector pose. The scheme was adapted to a dexterous hand in which the joints are either prismatic or revolute and the fingers are considered open kinematic chains. Feasible solutions were obtained using a three-fingered dexterous hand. An algorithm to estimate the position and orientation of a pre-grasped object was also developed. The algorithm was based on triangulation using an ideal sensor and a spherical object model. By choosing the object to be a sphere, only the position of the object frame was important. Based on these simplifications, a minimum of three sensors are needed to find the position of a sphere. A two dimensional example to determine the position of a circle coordinate frame using a two-fingered dexterous hand was presented.
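The pose-matching step can be sketched with a generic nonlinear least-squares solver. The toy below uses a planar two-revolute-joint chain rather than the three-fingered hand with prismatic joints described in the abstract; the link lengths, target pose and the use of scipy's least_squares are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(q, link_lengths=(1.0, 0.8)):
    """Fingertip position of a planar two-revolute-joint chain (toy model)."""
    l1, l2 = link_lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def pose_residual(q, target):
    """Difference between target and actual end effector pose."""
    return forward_kinematics(q) - target

target = np.array([1.2, 0.9])
# nonlinear least-squares: minimize the pose error over the joint variables
sol = least_squares(pose_residual, x0=np.array([0.3, 0.3]), args=(target,))
print(sol.x, forward_kinematics(sol.x))
```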
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-04-01
A transportation policy analysis methodology described in Guidelines for Travel Demand Analyses of Program Measures to Promote Carpools, Vanpools, and Public Transportation, November, 1976 (EAPA 4:1921) is demonstrated. The results reported build upon the two levels of analysis capabilities (a fully calibrated and operational computer package based on a set of disaggregate travel demand models that were estimated on a random sample of urban travelers and a manual procedure or sketch planning pivot-point version of the above methodology) and have undertaken to accomplish the following objectives: transferability, testing the manual approach on actual applications, and validating the method. The first objective was investigated by examining and comparing disaggregate models that were estimated in 7 US cities by eight different organizations. The next two objectives were investigated using separate case studies: the Washington, DC, Shirley Highway preferential transit and carpool lanes; the Portland, Oregon, Banfield Highway Expressway preferential transit and carpool lanes; the Los Angeles, Santa Monica Freeway preferential Diamond Lane and ramp metering facilities for transit and carpools; the Minneapolis express bus on metered freeway project; and the Portland, Oregon, carpool matching and promotion programs for the general public and for employer-based groups. Principal findings are summarized and results consolidated. (MCW)
Orion MPCV GN and C End-to-End Phasing Tests
NASA Technical Reports Server (NTRS)
Neumann, Brian C.
2013-01-01
End-to-end integration tests are critical risk reduction efforts for any complex vehicle. Phasing tests are an end-to-end integrated test that validates system directional phasing (polarity) from sensor measurement through software algorithms to end effector response. Phasing tests are typically performed on a fully integrated and assembled flight vehicle where sensors are stimulated by moving the vehicle and the effectors are observed for proper polarity. Orion Multi-Purpose Crew Vehicle (MPCV) Pad Abort 1 (PA-1) Phasing Test was conducted from inertial measurement to Launch Abort System (LAS). Orion Exploration Flight Test 1 (EFT-1) has two end-to-end phasing tests planned. The first test from inertial measurement to Crew Module (CM) reaction control system thrusters uses navigation and flight control system software algorithms to process commands. The second test from inertial measurement to CM S-Band Phased Array Antenna (PAA) uses navigation and communication system software algorithms to process commands. Future Orion flights include Ascent Abort Flight Test 2 (AA-2) and Exploration Mission 1 (EM-1). These flights will include additional or updated sensors, software algorithms and effectors. This paper will explore the implementation of end-to-end phasing tests on a flight vehicle which has many constraints, trade-offs and compromises. Orion PA-1 Phasing Test was conducted at White Sands Missile Range (WSMR) from March 4-6, 2010. This test decreased the risk of mission failure by demonstrating proper flight control system polarity. Demonstration was achieved by stimulating the primary navigation sensor, processing sensor data to commands and viewing propulsion response. PA-1 primary navigation sensor was a Space Integrated Inertial Navigation System (INS) and Global Positioning System (GPS) (SIGI) which has onboard processing, INS (3 accelerometers and 3 rate gyros) and no GPS receiver. SIGI data was processed by GN&C software into thrust magnitude and direction commands. The processing changes through three phases of powered flight: pitchover, downrange and reorientation. The primary inputs to GN&C are attitude position, attitude rates, angle of attack (AOA) and angle of sideslip (AOS). Pitch and yaw attitude and attitude rate responses were verified by using a flight spare SIGI mounted to a 2-axis rate table. AOA and AOS responses were verified by using data recorded from SIGI movements on a robotic arm located at NASA Johnson Space Center. The data was consolidated and used as an open-loop input to the SIGI. Propulsion was the Launch Abort System (LAS) Attitude Control Motor (ACM) which consisted of a solid motor with 8 nozzles. Each nozzle has active thrust control by varying throat area with a pintle. LAS ACM pintles are observable through optically transparent nozzle covers. SIGI movements on robot arm, SIGI rate table movements and LAS ACM pintle responses were video recorded as test artifacts for analysis and evaluation. The PA-1 Phasing Test design was determined based on test performance requirements, operational restrictions and EGSE capabilities. This development progressed during different stages. For convenience these development stages are initial, working group, tiger team, Engineering Review Team (ERT) and final.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. 
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes.
Motion-seeded object-based attention for dynamic visual imagery
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Kim, Kyungnam
2017-05-01
This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) that (1) processes each frame using an advanced motion algorithm that pulls out regions that exhibit anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
NASA Technical Reports Server (NTRS)
Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.
1991-01-01
Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.
2012-09-03
practice to solve these initial value problems. Additionally, the predictor/corrector methods are combined with adaptive stepsize and adaptive ... for implementing a numerical path tracking algorithm is to decide which predictor/corrector method to employ, how large to take the step Δt, and what ... the endgame algorithm. Output: a steady state solution. Set ε = 1; while ε >= ε_end, set the stepsize Δε using the adaptive stepsize control algorithm.
Traffic off-balancing algorithm for energy efficient networks
NASA Astrophysics Data System (ADS)
Kim, Junhyuk; Lee, Chankyun; Rhee, June-Koo Kevin
2011-12-01
Physical layer of high-end network system uses multiple interface arrays. Under the load-balancing perspective, light load can be distributed to multiple interfaces. However, it can cause energy inefficiency in terms of the number of poor utilization interfaces. To tackle this energy inefficiency, traffic off-balancing algorithm for traffic adaptive interface sleep/awake is investigated. As a reference model, 40G/100G Ethernet is investigated. We report that suggested algorithm can achieve energy efficiency while satisfying traffic transmission requirement.
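The off-balancing idea, concentrating traffic on as few interfaces as possible so the remainder can sleep, can be sketched in a few lines. This is a minimal illustration under assumed units and capacities, not the 40G/100G Ethernet algorithm evaluated in the paper.

```python
def off_balance(total_load, n_interfaces, capacity):
    """Assign traffic to as few interfaces as possible (off-balancing).

    Interfaces that receive no traffic can be put to sleep; loads and
    capacities are in the same arbitrary units. Illustrative sketch only.
    """
    loads = [0.0] * n_interfaces
    remaining = total_load
    for i in range(n_interfaces):
        if remaining <= 0:
            break
        loads[i] = min(capacity, remaining)   # fill one interface at a time
        remaining -= loads[i]
    active = sum(1 for l in loads if l > 0)
    return loads, active

loads, active = off_balance(total_load=35.0, n_interfaces=4, capacity=25.0)
print(loads, "active interfaces:", active)    # [25.0, 10.0, 0.0, 0.0] -> 2
```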
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Cuevas, B. Raydo, H. Dong, A. Gupta, F.J. Barbosa, J. Wilson, W.M. Taylor, E. Jastrzembski, D. Abbott
We will demonstrate a hardware and firmware solution for a complete fully pipelined multi-crate trigger system that takes advantage of the elegant high speed VXS serial extensions for VME. This trigger system includes three sections starting with the front end crate trigger processor (CTP), a global Sub-System Processor (SSP) and a Trigger Supervisor that manages the timing, synchronization and front end event readout. Within a front end crate, trigger information is gathered from each 16 Channel, 12 bit Flash ADC module at 4 ns intervals via the VXS backplane, to a Crate Trigger Processor (CTP). Each Crate Trigger Processor receives these 500 MB/s VXS links from the 16 FADC-250 modules, aligns skewed data inherent to the Aurora protocol, and performs real time crate level trigger algorithms. The algorithm results are encoded using a Reed-Solomon technique and transmission of this Level 1 trigger data is sent to the SSP using a multi-fiber link. The multi-fiber link achieves an aggregate trigger data transfer rate to the global trigger at 8 Gb/s. The SSP receives and decodes Reed-Solomon error correcting transmission from each crate, aligns the data, and performs the global level trigger algorithms. The entire trigger system is synchronous and operates at 250 MHz with the Trigger Supervisor managing not only the front end event readout, but also the distribution of the critical timing clocks, synchronization signals, and the global trigger signals to each front end readout crate. These signals are distributed to the front end crates on a separate fiber link and each crate is synchronized using a unique encoding scheme to guarantee that each front end crate is synchronous with a fixed latency, independent of the distance between each crate. The overall trigger signal latency is <3 μs, and the proposed 12 GeV experiments at Jefferson Lab require up to 200 kHz Level 1 trigger rate.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter. The genetic algorithm is then used again to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
Subjective time pressure: general or domain specific?
Kleiner, Sibyl
2014-09-01
Chronic time pressure has been identified as a pervasive societal problem, exacerbated by high demands of the labor market and the home. Yet time pressure has not been disaggregated and examined separately across home and work contexts, leaving many unanswered questions regarding the sources and potentially stressful consequences of time pressure. Using data collected in the United States General Social Survey waves 2002 and 2004, this study disaggregates time pressure into the domains of home and work, and asks whether considering time pressures within distinct work and home contexts reveals distinct predictors or associations with stress. Findings show that both predictors and stress associations differ across work and home pressures, revealing both methodological and theoretical implications for the study of time pressure and work and family life more generally. Copyright © 2014 Elsevier Inc. All rights reserved.
Continuation Power Flow Analysis for PV Integration Studies at Distribution Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiyu; Zhu, Xiangqi; Lubkeman, David L.
2017-10-30
This paper presents a method for conducting continuation power flow simulation on high-solar penetration distribution feeders. A load disaggregation method is developed to disaggregate the daily feeder load profiles collected in substations down to each load node, where the electricity consumption of residential houses and commercial buildings is modeled using actual data collected from single family houses and commercial buildings. This allows the modeling of power flow and voltage profile along a distribution feeder in a continuous fashion for a 24-hour period at minute-by-minute resolution. By separating the feeder into load zones based on the distance between the load node and the feeder head, we studied the impact of PV penetration on distribution grid operation in different seasons and under different weather conditions for different PV placements.
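A minimal sketch of the load disaggregation step is given below: the feeder-head profile is split across load nodes in proportion to assumed per-node energies and per-class daily shapes, then rescaled so the node profiles sum exactly to the feeder measurement at every time step. The shapes and energies are placeholders, not the metered residential/commercial data used in the study.

```python
import numpy as np

def disaggregate_feeder_load(feeder_profile, node_energy, node_shapes):
    """Split a feeder-head load profile onto individual load nodes.

    feeder_profile : array (T,) feeder-head demand per time step
    node_energy    : array (N,) relative daily energy of each node
    node_shapes    : array (N, T) normalized per-class daily shapes
    The raw node profiles are rescaled per time step so the nodes sum
    exactly to the feeder-head measurement. Illustrative sketch only.
    """
    feeder_profile = np.asarray(feeder_profile, dtype=float)
    node_energy = np.asarray(node_energy, dtype=float)
    node_shapes = np.asarray(node_shapes, dtype=float)
    raw = node_energy[:, None] * node_shapes       # unscaled node profiles
    scale = feeder_profile / raw.sum(axis=0)       # per-time-step correction
    return raw * scale[None, :]

feeder = np.array([100.0, 150.0, 220.0, 180.0])    # toy: 4 time steps
energy = np.array([0.6, 0.4])                      # two load nodes
shapes = np.array([[0.2, 0.25, 0.3, 0.25],         # residential-like shape
                   [0.3, 0.3, 0.2, 0.2]])          # commercial-like shape
nodes = disaggregate_feeder_load(feeder, energy, shapes)
print(nodes.sum(axis=0))                           # equals the feeder profile
```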
Preparation of Amyloid Fibrils Seeded from Brain and Meninges.
Scherpelz, Kathryn P; Lu, Jun-Xia; Tycko, Robert; Meredith, Stephen C
2016-01-01
Seeding of amyloid fibrils into fresh solutions of the same peptide or protein in disaggregated form leads to the formation of replicate fibrils, with close structural similarity or identity to the original fibrillar seeds. Here we describe procedures for isolating fibrils composed mainly of β-amyloid (Aβ) from human brain and from leptomeninges, a source of cerebral blood vessels, for investigating Alzheimer's disease and cerebral amyloid angiopathy. We also describe methods for seeding isotopically labeled, disaggregated Aβ peptide solutions for study using solid-state NMR and other techniques. These methods should be applicable to other types of amyloid fibrils, to Aβ fibrils from mice or other species, tissues other than brain, and to some non-fibrillar aggregates. These procedures allow for the examination of authentic amyloid fibrils and other protein aggregates from biological tissues without the need for labeling the tissue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrich, Jon M.; Rivers, Mark L.; Perlowitz, Michael A.
We show that synchrotron x-ray microtomography (μCT) followed by digital data extraction can be used to examine the size distribution and particle morphologies of the polydisperse (750 to 2450 μm diameter) particle size standard NIST 1019b. Our size distribution results are within errors of certified values with data collected at 19.5 μm/voxel. One of the advantages of using μCT to investigate the particles examined here is that the morphology of the glass beads can be directly examined. We use the shape metrics aspect ratio and sphericity to examine the morphologies of individual standard beads as a function of spherical equivalent diameter. We find that the majority of standard beads possess near-spherical aspect ratios and sphericities, but deviations are present at the lower end of the size range. The majority (> 98%) of particles also possess an equant form when examined using a common measure of equidimensionality. Although the NIST 1019b standard consists of loose particles, we point out that an advantage of μCT is that coherent materials comprised of particles can be examined without disaggregation.
Regional stochastic generation of streamflows using an ARIMA (1,0,1) process and disaggregation
Armbruster, Jeffrey T.
1979-01-01
An ARIMA (1,0,1) model was calibrated and used to generate long annual flow sequences at three sites in the Juniata River basin, Pennsylvania. The model preserves the mean, variance, and cross correlations of the observed station data. In addition, it has a desirable blend of both high and low frequency characteristics and therefore is capable of preserving the Hurst coefficient, h. The generated annual flows are disaggregated into monthly sequences using a modification of the Valencia-Schaake model. The low-flow frequency and flow duration characteristics of the generated monthly flows, with length equal to the historical data, compare favorably with the historical data. Once the models were verified, 100-year sequences were generated and analyzed for their low flow characteristics. One-, three-, and six-month low-flow frequencies at recurrence intervals greater than 10 years are generally found to be lower than flows computed from the historical flows. A method is proposed for synthesizing flows at ungaged sites. (Kosco-USGS)
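The generation-then-disaggregation workflow can be sketched as follows. The ARMA(1,1) recursion (equivalent to ARIMA(1,0,1)) and the fixed monthly fractions are illustrative; the Valencia-Schaake model used in the study additionally preserves month-to-month covariances, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_arma11(n, phi=0.6, theta=0.3, mean=100.0, sigma=15.0):
    """Generate annual flows from an ARMA(1,1) (= ARIMA(1,0,1)) process
    using the Box-Jenkins sign convention; parameters are placeholders."""
    e = rng.normal(0.0, sigma, n + 1)
    x = np.empty(n)
    prev = mean
    for t in range(n):
        x[t] = mean + phi * (prev - mean) + e[t + 1] - theta * e[t]
        prev = x[t]
    return x

def disaggregate_to_months(annual, monthly_fractions):
    """Split each annual flow into 12 monthly flows using fixed historical
    monthly fractions (a simple proportional stand-in, not Valencia-Schaake)."""
    fractions = np.asarray(monthly_fractions, dtype=float)
    fractions = fractions / fractions.sum()
    return np.outer(annual, fractions)           # shape (n_years, 12)

annual = generate_arma11(100)                     # 100-year synthetic record
fractions = [0.10, 0.11, 0.13, 0.12, 0.09, 0.07,
             0.05, 0.04, 0.05, 0.07, 0.08, 0.09]
monthly = disaggregate_to_months(annual, fractions)
print(annual[:3], monthly[0].sum())               # monthly sums match annual flow
```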
Characterize older driver behavior for traffic simulation and vehicle emission model.
DOT National Transportation Integrated Search
2012-05-01
The use of traffic simulation models is becoming more widespread as a means of : assessing traffic, safety and environmental impacts as a result of infrastructure, control and : operational changes at disaggregate levels. It is imperative that these ...
Supercomputing resources empowering superstack with interactive and integrated systems
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2012-09-01
This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, known as geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in the processing of seismic data, beginning on mainframes and leading in recent years to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio the use of iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
NASA Technical Reports Server (NTRS)
Maier, Launa M.; Huddleston, Lisa L.
2017-01-01
Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will be warning on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper will demonstrate improved performance of an enveloping algorithm of the electric field mill data as compared with the electric field zero crossing to identify initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.
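The comparison between an enveloping criterion and the zero-crossing criterion can be illustrated on a synthetic field-mill trace. The window length, fair-weather band and synthetic storm signature below are assumptions for illustration, not the operational KSC thresholds or algorithm.

```python
import numpy as np

def field_envelope(e, window):
    """Running max/min envelope of a field-mill time series."""
    n = len(e)
    upper = np.array([e[max(0, i - window):i + 1].max() for i in range(n)])
    lower = np.array([e[max(0, i - window):i + 1].min() for i in range(n)])
    return upper, lower

def alarm_times(e, window=30, band=0.5):
    """First alarm from an envelope criterion vs. the zero-crossing criterion.

    The envelope alarm fires when the running envelope leaves an assumed
    fair-weather band of +/- `band` kV/m; the reference criterion fires at
    the field's first zero crossing. Thresholds are illustrative assumptions.
    """
    e = np.asarray(e, dtype=float)
    upper, lower = field_envelope(e, window)
    outside = (upper > band) | (lower < -band)
    env_alarm = int(np.argmax(outside)) if outside.any() else None
    crossings = np.nonzero(np.diff(np.sign(e)) != 0)[0]
    zero_alarm = int(crossings[0]) + 1 if crossings.size else None
    return env_alarm, zero_alarm

t = np.arange(600.0)
field = 0.15 + 0.9 * np.exp(-((t - 260) / 40.0) ** 2) \
        - 0.002 * np.clip(t - 300.0, 0.0, None)        # kV/m, synthetic storm
print(alarm_times(field))       # envelope alarm precedes the zero crossing
```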
Information theoretic analysis of linear shift-invariant edge-detection operators
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2012-06-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influences by the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed one definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information theory based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
Jin, Zhigang; Ma, Yingying; Su, Yishan; Li, Shuo; Fu, Xiaomei
2017-07-19
Underwater sensor networks (UWSNs) have become a hot research topic because of their various aquatic applications. As the underwater sensor nodes are powered by built-in batteries which are difficult to replace, extending the network lifetime is a most urgent need. Due to the low and variable transmission speed of sound, the design of reliable routing algorithms for UWSNs is challenging. In this paper, we propose a Q-learning based delay-aware routing (QDAR) algorithm to extend the lifetime of underwater sensor networks. In QDAR, a data collection phase is designed to adapt to the dynamic environment. With the application of the Q-learning technique, QDAR can determine a global optimal next hop rather than a greedy one. We define an action-utility function in which residual energy and propagation delay are both considered for adequate routing decisions. Thus, the QDAR algorithm can extend the network lifetime by uniformly distributing the residual energy and provide lower end-to-end delay. The simulation results show that our protocol can yield nearly the same network lifetime, and can reduce the end-to-end delay by 20-25% compared with a classic lifetime-extended routing protocol (QELAR).
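A toy version of the Q-learning update with an action-utility that trades off residual energy against propagation delay is sketched below. The topology, weights, learning rate and epsilon-greedy exploration are illustrative assumptions, not the QDAR protocol parameters.

```python
import random

def action_utility(residual_energy, prop_delay, alpha=0.5, beta=0.5):
    """Reward for forwarding to a neighbor: favor high residual energy and
    low propagation delay (weights alpha/beta are illustrative)."""
    return alpha * residual_energy - beta * prop_delay

def q_update(Q, node, neighbor, reward, next_best, lr=0.3, gamma=0.9):
    """Standard Q-learning update for the (node, next-hop) pair."""
    old = Q.get((node, neighbor), 0.0)
    Q[(node, neighbor)] = old + lr * (reward + gamma * next_best - old)

# toy topology: node 0 forwards toward the sink (node 3) via node 1 or 2
Q = {}
neighbors = {0: [1, 2], 1: [3], 2: [3], 3: []}
energy = {1: 0.9, 2: 0.4, 3: 1.0}      # residual energy (normalized)
delay = {1: 0.6, 2: 0.2, 3: 0.1}       # propagation delay (normalized)

for episode in range(200):
    node = 0
    while neighbors[node]:
        # epsilon-greedy choice of next hop
        if random.random() < 0.1:
            nxt = random.choice(neighbors[node])
        else:
            nxt = max(neighbors[node], key=lambda n: Q.get((node, n), 0.0))
        r = action_utility(energy[nxt], delay[nxt])
        next_best = max((Q.get((nxt, n), 0.0) for n in neighbors[nxt]),
                        default=0.0)
        q_update(Q, node, nxt, r, next_best)
        node = nxt

print({k: round(v, 3) for k, v in Q.items()})
```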
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in supervised mode. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid the obstacle in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
NASA Astrophysics Data System (ADS)
Chapman, Martin Colby
1998-12-01
The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.
Simulation results for a finite element-based cumulative reconstructor
NASA Astrophysics Data System (ADS)
Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny
2017-10-01
Modern ground-based telescopes rely on adaptive optics (AO) systems for the compensation of image degradation caused by atmospheric turbulences. Within an AO system, measurements of incoming light from guide stars are used to adjust deformable mirror(s) in real time that correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere lead to the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm that was developed by Neubauer to reconstruct the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulation while at the same time maintaining the speed of recently introduced solvers with linear order speed.
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
The next generation ground-based telescopes rely heavily on adaptive optics for overcoming the limitation of atmospheric turbulence. In the future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of European Southern Observatory.
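The conjugate-gradient solve at the core of the MAP estimator can be illustrated with a generic preconditioned CG routine. Here A is just a small synthetic symmetric positive definite matrix with a Jacobi preconditioner; in the tomography setting it would be the wavelet-domain normal-equation operator with the problem-specific preconditioner introduced in the paper.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv(r) applies the inverse of the preconditioner to a residual.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)            # SPD test matrix
b = rng.standard_normal(50)
diag_inv = 1.0 / np.diag(A)              # Jacobi (diagonal) preconditioner
x = pcg(A, b, lambda r: diag_inv * r)
print(np.linalg.norm(A @ x - b))         # residual norm after convergence
```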
Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale
NASA Astrophysics Data System (ADS)
Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.
2015-12-01
Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value for emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. Our algorithm then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using an image from MASTER collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels from MASTER, also collected as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene, from an end-to-end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator, and the results and errors tabulated and discussed.
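A minimal version of a Hanning-windowed sinc interpolator is sketched below for a 1-D signal; the kernel half-width and test signal are illustrative choices, and the 2-D image case follows by applying the kernel separably.

```python
import numpy as np

def hanning_windowed_sinc(samples, x_new, half_width=8):
    """Resample uniformly spaced samples at arbitrary positions.

    Uses a sinc kernel truncated to +/- half_width samples and tapered
    with a Hann (Hanning) window to suppress spectral leakage from the
    truncation. Positions x_new are in units of the original sample
    spacing. The kernel width is an illustrative choice.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    out = np.zeros(len(x_new))
    for j, x in enumerate(x_new):
        k0 = int(np.floor(x)) - half_width + 1
        ks = np.arange(k0, k0 + 2 * half_width)
        ks = ks[(ks >= 0) & (ks < n)]                # clip at the borders
        d = x - ks
        window = 0.5 * (1.0 + np.cos(np.pi * d / half_width))
        window[np.abs(d) >= half_width] = 0.0        # Hann taper support
        out[j] = np.sum(samples[ks] * np.sinc(d) * window)
    return out

t = np.arange(64)
signal = np.sin(2 * np.pi * t / 16.0)
x_new = np.linspace(5, 50, 91)
resampled = hanning_windowed_sinc(signal, x_new)
print(np.max(np.abs(resampled - np.sin(2 * np.pi * x_new / 16.0))))
```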
Joint Cost, Production Technology and Output Disaggregation in Regulated Motor Carriers
DOT National Transportation Integrated Search
1978-11-01
The study uses a sample of 252 Class I Instruction 27 Motor Carriers (Instruction 27 carriers earned at least 75 percent of their revenues from intercity transportation of general commodities over a three year period) of general freight that existed ...
45 CFR 286.260 - May Tribes use sampling and electronic filing?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 2 2013-10-01 2012-10-01 true May Tribes use sampling and electronic filing? 286... TRIBAL TANF PROVISIONS Data Collection and Reporting Requirements § 286.260 May Tribes use sampling and electronic filing? (a) Each Tribe may report disaggregated data on all recipient families (universal...
45 CFR 286.260 - May Tribes use sampling and electronic filing?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 2 2012-10-01 2012-10-01 false May Tribes use sampling and electronic filing? 286... TRIBAL TANF PROVISIONS Data Collection and Reporting Requirements § 286.260 May Tribes use sampling and electronic filing? (a) Each Tribe may report disaggregated data on all recipient families (universal...
45 CFR 286.260 - May Tribes use sampling and electronic filing?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true May Tribes use sampling and electronic filing? 286... TRIBAL TANF PROVISIONS Data Collection and Reporting Requirements § 286.260 May Tribes use sampling and electronic filing? (a) Each Tribe may report disaggregated data on all recipient families (universal...
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. Then we use the control-point-guided affine ICP algorithm to solve the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and outputs the updated sub data point sets. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point set non-rigid registration algorithms.
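The local affine step can be sketched as an affine variant of ICP: nearest-neighbor correspondences followed by a linear least-squares fit of an affine map. The hierarchical splitting into sub point sets and the shape control points of the paper are not reproduced, and the 2-D data below are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def affine_icp(source, target, n_iter=20):
    """Affine ICP between two 2-D point sets (illustrative building block).

    Each iteration finds nearest-neighbor correspondences with a k-d tree
    and solves a linear least-squares problem for the affine transform that
    best maps the source onto its matched target points.
    """
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    A, t = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        moved = src @ A.T + t
        _, idx = tree.query(moved)                  # correspondences
        matched = tgt[idx]
        # solve [x y 1] @ M = matched for M (3 x 2) in the least-squares sense
        X = np.hstack([src, np.ones((len(src), 1))])
        M, *_ = np.linalg.lstsq(X, matched, rcond=None)
        A, t = M[:2].T, M[2]
    return A, t

rng = np.random.default_rng(3)
target = rng.random((200, 2))
true_A = np.array([[1.1, 0.2], [-0.1, 0.9]])
source = (target - np.array([0.3, 0.1])) @ np.linalg.inv(true_A).T
A, t = affine_icp(source, target)
print(np.round(A, 2), np.round(t, 2))               # should recover true_A, [0.3 0.1]
```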
Improving Spectral Image Classification through Band-Ratio Optimization and Pixel Clustering
NASA Astrophysics Data System (ADS)
O'Neill, M.; Burt, C.; McKenna, I.; Kimblin, C.
2017-12-01
The Underground Nuclear Explosion Signatures Experiment (UNESE) seeks to characterize non-prompt observables from underground nuclear explosions (UNE). As part of this effort, we evaluated the ability of DigitalGlobe's WorldView-3 (WV3) to detect and map UNE signatures. WV3 is the current state-of-the-art, commercial, multispectral imaging satellite; however, it has relatively limited spectral and spatial resolutions. These limitations impede image classifiers from detecting targets that are spatially small and lack distinct spectral features. In order to improve classification results, we developed custom algorithms to reduce false positive rates while increasing true positive rates via a band-ratio optimization and pixel clustering front-end. The clusters resulting from these algorithms were processed with standard spectral image classifiers such as Mixture-Tuned Matched Filter (MTMF) and Adaptive Coherence Estimator (ACE). WV3 and AVIRIS data of Cuprite, Nevada, were used as a validation data set. These data were processed with a standard classification approach using MTMF and ACE algorithms. They were also processed using the custom front-end prior to the standard approach. A comparison of the results shows that the custom front-end significantly increases the true positive rate and decreases the false positive rate. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-3283.
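The front-end can be pictured as two stages: compute band-ratio features, then cluster pixels on those features before handing the clusters to classifiers such as MTMF or ACE. The band pairs, number of clusters and use of k-means below are illustrative assumptions, not the optimized configuration developed for UNESE.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def band_ratios(cube, pairs):
    """Compute band-ratio images for a multispectral cube.

    cube  : array (rows, cols, bands)
    pairs : list of (numerator_band, denominator_band) indices
    """
    eps = 1e-6
    return np.stack([cube[..., i] / (cube[..., j] + eps) for i, j in pairs],
                    axis=-1)

def cluster_pixels(ratio_img, k=5):
    """Group pixels by k-means on their ratio features; the resulting cluster
    map can then be fed to a conventional classifier. k is a placeholder."""
    rows, cols, f = ratio_img.shape
    flat = ratio_img.reshape(-1, f)
    _, labels = kmeans2(flat, k, minit='++')
    return labels.reshape(rows, cols)

rng = np.random.default_rng(0)
cube = rng.random((60, 60, 8))                    # synthetic 8-band image
ratios = band_ratios(cube, [(4, 2), (6, 1), (7, 3)])
labels = cluster_pixels(ratios, k=4)
print(labels.shape, np.unique(labels))
```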
NASA Astrophysics Data System (ADS)
Moralis-Pegios, M.; Terzenidis, N.; Mourgias-Alexandris, G.; Vyrsokinos, K.; Pleros, N.
2018-02-01
Disaggregated Data Centers (DCs) have emerged as a powerful architectural framework towards increasing resource utilization and system power efficiency, requiring, however, a networking infrastructure that can ensure low-latency and high-bandwidth connectivity between a high number of interconnected nodes. This reality has been the driving force towards high-port-count and low-latency optical switching platforms, with recent efforts concluding that the use of distributed control architectures as offered by Broadcast-and-Select (BS) layouts can lead to sub-μs latencies. However, almost all high-port-count optical switch designs proposed so far rely either on electronic buffering and associated SerDes circuitry for resolving contention or on buffer-less designs with packet drop and re-transmit procedures, unavoidably increasing latency or limiting throughput. In this article, we demonstrate a 256x256 optical switch architecture for disaggregated DCs that employs small-size optical delay line buffering in a distributed control scheme, exploiting FPGA-based header processing over a hybrid BS/wavelength routing topology that is implemented by a 16x16 BS design and a 16x16 AWGR. Simulation-based performance analysis reveals that even the use of a 2-packet optical buffer can yield <620 ns latency with >85% throughput for up to 100% loads. The switch has been experimentally validated with 10 Gb/s optical data packets using 1:16 optical splitting and a SOA-MZI wavelength converter (WC) along with fiber delay lines for the 2-packet buffer implementation at every BS outgoing port, followed by an additional SOA-MZI tunable WC and the 16x16 AWGR. Error-free performance in all different switch input/output combinations has been obtained with a power penalty of <2.5 dB.
Moradi, Najmeh; Rashidian, Arash; Rasekh, Hamid Reza; Olyaeemanesh, Alireza; Foroughi, Mahnoosh; Mohammadi, Teymoor
2017-01-01
The aim of this study was to estimate the monetary value of a QALY among patients with heart disease and to identify its determinants. A cross-sectional survey was conducted through face-to-face interviews with 196 patients with cardiovascular disease from two heart hospitals in Tehran, Iran, to estimate the value of a QALY using disaggregated and aggregated approaches. The EuroQol-5 Dimension (EQ-5D) questionnaire, Visual Analogue Scale (VAS), Time Trade-Off (TTO) and contingent valuation WTP techniques were employed, first to elicit patients' preferences and then to estimate WTP for a QALY. The association of patients' characteristics with WTP for a QALY was assessed through a Heckman selection model. The mean willingness to pay per QALY, estimated by the disaggregated approach, ranged from 2,799 to 3,599 US dollars. It is higher than the values estimated from aggregated methods (USD 2,256 to 3,137). However, in both approaches, the values were less than one Gross Domestic Product (GDP) per capita of Iran. Significant variables were: current health state, education, age, marital status, number of comorbidities, and household cost group. Our results challenge two major issues: the first is a policy challenge concerning the WHO recommendation to use less than 3 GDP per capita as a cost-effectiveness threshold value. The second is an analytical challenge related to patients with zero QALY gain. More scrutiny is suggested on the issue of how patients with full health state valuation should be dealt with and what arbitrary value could be included in the estimation of the value of a QALY when the disaggregated approach is used. PMID:28979338
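The difference between the two estimation strategies reduces to how the ratio is formed: the disaggregated approach averages each respondent's own WTP/QALY-gain ratio (which is undefined for a zero QALY gain, the issue flagged above), while the aggregated approach divides mean WTP by mean QALY gain. The sketch below uses synthetic numbers, not the survey data.

```python
import numpy as np

def wtp_per_qaly(wtp, qaly_gain):
    """Aggregated vs. disaggregated WTP-per-QALY estimates.

    Disaggregated: mean of each respondent's own WTP / QALY-gain ratio
    (respondents with zero QALY gain are excluded here).
    Aggregated: ratio of mean WTP to mean QALY gain across respondents.
    """
    wtp = np.asarray(wtp, dtype=float)
    gain = np.asarray(qaly_gain, dtype=float)
    nonzero = gain > 0
    disaggregated = np.mean(wtp[nonzero] / gain[nonzero])
    aggregated = wtp.mean() / gain.mean()
    return disaggregated, aggregated

wtp = [300, 150, 500, 80, 250]          # stated WTP (synthetic, in USD)
gain = [0.10, 0.05, 0.20, 0.0, 0.08]    # QALY gains; one respondent has 0
print(wtp_per_qaly(wtp, gain))          # the two approaches give different values
```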
45 CFR 265.5 - May States use sampling?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false May States use sampling? 265.5 Section 265.5... REQUIREMENTS § 265.5 May States use sampling? (a) Each State may report the disaggregated data in the TANF Data... the use of a scientifically acceptable sampling method that we have approved. States may use sampling...
45 CFR 265.5 - May States use sampling?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 2 2011-10-01 2011-10-01 false May States use sampling? 265.5 Section 265.5... REQUIREMENTS § 265.5 May States use sampling? (a) Each State may report the disaggregated data in the TANF Data... the use of a scientifically acceptable sampling method that we have approved. States may use sampling...
Optimal Coordination of Building Loads and Energy Storage for Power Grid and End User Services
Hao, He; Wu, Di; Lian, Jianming; ...
2017-01-18
Demand response and energy storage play a profound role in the smart grid. The focus of this study is to evaluate benefits of coordinating flexible loads and energy storage to provide power grid and end user services. We present a Generalized Battery Model (GBM) to describe the flexibility of building loads and energy storage. An optimization-based approach is proposed to characterize the parameters (power and energy limits) of the GBM for flexible building loads. We then develop optimal coordination algorithms to provide power grid and end user services such as energy arbitrage, frequency regulation, spinning reserve, as well as energy cost and demand charge reduction. Several case studies have been performed to demonstrate the efficacy of the GBM and coordination algorithms, and evaluate the benefits of using their flexibility for power grid and end user services. We show that optimal coordination yields significant cost savings and revenue. Moreover, the best option for power grid services is to provide energy arbitrage and frequency regulation. Furthermore, when coordinating flexible loads with energy storage to provide end user services, it is recommended to consider demand charge in addition to time-of-use price in order to flatten the aggregate power profile.
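Energy arbitrage with a Generalized-Battery-type model can be posed as a small linear program: choose the power profile within its power limits so that the stored energy stays within its bounds while minimizing the cost of energy at given prices. The formulation and numbers below are a minimal sketch, not the coordination algorithms or GBM characterization of the study.

```python
import numpy as np
from scipy.optimize import linprog

def arbitrage_schedule(prices, p_min, p_max, e_min, e_max, e0=0.0, dt=1.0):
    """Energy arbitrage for a Generalized-Battery-type model via an LP.

    Decision variables are the charging powers p_t (negative = discharge).
    Constraints keep the stored energy e0 + dt * cumsum(p) within
    [e_min, e_max] and each p_t within its power limits. The objective is
    the cost of energy purchased at the given prices.
    """
    T = len(prices)
    c = np.asarray(prices, dtype=float) * dt    # cost of energy purchased
    L = np.tril(np.ones((T, T))) * dt           # cumulative-energy operator
    A_ub = np.vstack([L, -L])                   # e <= e_max and -e <= -e_min
    b_ub = np.concatenate([np.full(T, e_max - e0), np.full(T, e0 - e_min)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(p_min, p_max)] * T,
                  method="highs")
    return res.x

prices = [30, 25, 20, 45, 60, 55]               # $/MWh over six hours
p = arbitrage_schedule(prices, p_min=-5, p_max=5, e_min=0, e_max=10)
print(np.round(p, 2))         # charges when cheap, discharges when expensive
```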
A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm
NASA Astrophysics Data System (ADS)
Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting
2017-10-01
The resolution of the continuous wavelet transform (CWT) varies with frequency. Exploiting this property, the time-frequency signal of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is adopted to extract the wavelet ridge, from which the velocity history of the measured object is obtained. A numerical simulation is carried out; the reconstructed signal is fully consistent with the original signal, which verifies the accuracy of the algorithm. The vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading are measured by AFDISAR, and the measured coherent signals are processed. The velocity signals of the loudspeaker and of the free end of the Hopkinson incident bar are reconstructed respectively. Compared with the theoretical calculation, the error in the particle vibration arrival time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and adapts well to signals with different time-frequency features. The algorithm overcomes the limitation of manually adjusting the time window according to the signal variation when using the STFT, and is suitable for extracting signals measured by AFDISAR.
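The ridge-extraction idea can be illustrated with a direct Morlet CWT and a greedy ridge tracker that penalizes jumps between adjacent scales; this greedy tracker is a simplification standing in for the crazy-climber algorithm, and the wavelet parameters and test chirp are assumptions.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet (direct convolution)."""
    n = len(signal)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)
        coeffs[i] = np.convolve(signal, np.conj(wavelet[::-1]), mode="same")
    return coeffs

def extract_ridge(coeffs, penalty=2.0):
    """Greedy ridge tracker: at each time step pick the scale maximizing
    |CWT| minus a penalty for jumping away from the previous scale."""
    power = np.abs(coeffs)
    ridge = [int(np.argmax(power[:, 0]))]
    for k in range(1, power.shape[1]):
        cost = power[:, k] - penalty * np.abs(np.arange(power.shape[0]) - ridge[-1])
        ridge.append(int(np.argmax(cost)))
    return np.array(ridge)

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
chirp = np.sin(2 * np.pi * (50 * t + 40 * t ** 2))   # frequency sweeps 50->130 Hz
scales = np.linspace(5, 30, 60)
ridge = extract_ridge(morlet_cwt(chirp, scales))
inst_freq = 6.0 * fs / (2 * np.pi * scales[ridge])   # w0 * fs / (2*pi*scale)
print(inst_freq[::200])                              # roughly tracks 50->130 Hz
```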
Taylor, Ian M; Ntoumanis, Nikos; Standage, Martyn; Spray, Christopher M
2010-02-01
Grounded in self-determination theory (SDT; Deci & Ryan, 2000), the current study explored whether physical education (PE) students' psychological needs and their motivational regulations toward PE predicted mean differences and changes in effort in PE, exercise intentions, and leisure-time physical activity (LTPA) over the course of one UK school trimester. One hundred and seventy-eight students (69% male) aged between 11 and 16 years completed a multisection questionnaire at the beginning, middle, and end of a school trimester. Multilevel growth models revealed that students' perceived competence and self-determined regulations were the most consistent predictors of the outcome variables at the within- and between-person levels. The results of this work add to the extant SDT-based literature by examining change in PE students' motivational regulations and psychological needs, as well as underscoring the importance of disaggregating within- and between-student effects.
Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C
1993-05-10
An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides a much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
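The run-length-encoding half of the storage format is easy to sketch; the LZW stage and the actual e-beam pattern format are not reproduced, and the bit pattern below is synthetic.

```python
def run_length_encode(bits):
    """Run-length encode a binary exposure pattern as (value, count) pairs.

    Hologram pattern data is dominated by long runs of identical pixels, so
    RLE (optionally followed by LZW, not shown) shrinks the pattern-data
    file dramatically. Purely illustrative.
    """
    runs = []
    if not bits:
        return runs
    current, count = bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def run_length_decode(runs):
    """Invert run_length_encode."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

pattern = [0] * 500 + [1] * 12 + [0] * 300 + [1] * 8
encoded = run_length_encode(pattern)
assert run_length_decode(encoded) == pattern
print(len(pattern), "pixels ->", len(encoded), "runs:", encoded)
```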
NASA GPM GV Science Implementation
NASA Technical Reports Server (NTRS)
Petersen, W. A.
2009-01-01
Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
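The elementary operation at the heart of such a polygonization, clipping a convex polygon against the half-space of a BSP plane, can be sketched as follows; this is a plain Sutherland-Hodgman-style step and does not reproduce the paper's logical-operation-based robust variant.

```python
import numpy as np

def clip_polygon(poly, normal, offset, eps=1e-9):
    """Keep the part of a convex polygon on the side where dot(normal, x) <= offset.
    poly: sequence of vertices in order. Returns the clipped vertex array."""
    poly = np.asarray(poly, dtype=float)
    dist = poly @ np.asarray(normal, dtype=float) - offset
    out = []
    n = len(poly)
    for i in range(n):
        j = (i + 1) % n
        di, dj = dist[i], dist[j]
        if di <= eps:                      # current vertex is inside the half-space
            out.append(poly[i])
        if (di <= eps) != (dj <= eps):     # edge crosses the plane: add intersection
            t = di / (di - dj)
            out.append(poly[i] + t * (poly[j] - poly[i]))
    return np.array(out)

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(clip_polygon(tri, normal=(1, 0, 0), offset=0.5))   # keeps the x <= 0.5 part
```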
Wireless Sensor Network Metrics for Real-Time Systems
2009-05-20
to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching... a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate
Islam, Nadia Shilpi; Khan, Suhaila; Kwon, Simona; Jang, Deeana; Ro, Marguerite; Trinh-Shevrin, Chau
2011-01-01
There are close to 15 million Asian Americans living in the United States, and they represent the fastest growing populations in the country. By the year 2050, there will be an estimated 33.4 million Asian Americans living in the country. However, their health needs remain poorly understood and there is a critical lack of data disaggregated by Asian American ethnic subgroups, primary language, and geography. This paper examines methodological issues, challenges, and potential solutions to addressing the collection, analysis, and reporting of disaggregated (or, granular) data on Asian Americans. The article explores emerging efforts to increase granular data through the use of innovative study design and analysis techniques. Concerted efforts to implement these techniques will be critical to the future development of sound research, health programs, and policy efforts targeting this and other minority populations. PMID:21099084
Disentangling WTP per QALY data: different analytical approaches, different answers.
Gyrd-Hansen, Dorte; Kjaer, Trine
2012-03-01
A large random sample of the Danish general population was asked to value health improvements by way of both the time trade-off elicitation technique and willingness-to-pay (WTP) using contingent valuation methods. The data demonstrate a high degree of heterogeneity across respondents in their relative valuations on the two scales. This has implications for data analysis. We show that the estimates of WTP per QALY are highly sensitive to the analytical strategy. For both open-ended and dichotomous choice data we demonstrate that choice of aggregated approach (ratios of means) or disaggregated approach (means of ratios) affects estimates markedly as does the interpretation of the constant term (which allows for disproportionality across the two scales) in the regression analyses. We propose that future research should focus on why some respondents are unwilling to trade on the time trade-off scale, on how to interpret the constant value in the regression analyses, and on how best to capture the heterogeneity in preference structures when applying mixed multinomial logit. Copyright © 2011 John Wiley & Sons, Ltd.
DOT National Transportation Integrated Search
1997-01-01
Discrete choice models have expanded the ability of transportation planners to forecast future trends. Where new services or policies are proposed, the stated-choice approach can provide an objective basis for forecasts. Stated-choice models are subj...
POLYNOMIAL-BASED DISAGGREGATION OF HOURLY RAINFALL FOR CONTINUOUS HYDROLOGIC SIMULATION
Hydrologic modeling of urban watersheds for designs and analyses of stormwater conveyance facilities can be performed in either an event-based or continuous fashion. Continuous simulation requires, among other things, the use of a time series of rainfall amounts. However, for urb...
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. 
Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
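The monitor-detect-respond flow described above can be pictured with a toy state machine; the states, thresholds, and sensor readings below are purely illustrative assumptions and are not the SLS M&FM design.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    FAULT_DETECTED = auto()
    SAFING = auto()
    ABORT = auto()

def step(mode, sensor_value, fault_limit, abort_limit):
    """One evaluation cycle of a toy fault-management state machine."""
    if mode is Mode.NOMINAL and sensor_value > fault_limit:
        return Mode.FAULT_DETECTED          # off-nominal condition observed
    if mode is Mode.FAULT_DETECTED:
        return Mode.ABORT if sensor_value > abort_limit else Mode.SAFING
    return mode

mode = Mode.NOMINAL
for reading in [0.2, 0.9, 1.5]:             # hypothetical sensor readings
    mode = step(mode, reading, fault_limit=0.8, abort_limit=1.2)
print(mode)  # Mode.ABORT
```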
Mazurana, Dyan; Benelli, Prisca; Walker, Peter
2013-07-01
Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crises-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
Mapping Urban Risk: Flood Hazards, Race, & Environmental Justice In New York
Maantay, Juliana; Maroko, Andrew
2009-01-01
This paper demonstrates the importance of disaggregating population data aggregated by census tracts or other units, for more realistic population distribution/location. A newly-developed mapping method, the Cadastral-based Expert Dasymetric System (CEDS), calculates population in hyper-heterogeneous urban areas better than traditional mapping techniques. A case study estimating population potentially impacted by flood hazard in New York City compares the impacted population determined by CEDS with that derived by centroid-containment method and filtered areal weighting interpolation. Compared to CEDS, 37 percent and 72 percent fewer people are estimated to be at risk from floods city-wide, using conventional areal weighting of census data, and centroid-containment selection, respectively. Undercounting of impacted population could have serious implications for emergency management and disaster planning. Ethnic/racial populations are also spatially disaggregated to determine any environmental justice impacts with flood risk. Minorities are disproportionately undercounted using traditional methods. Underestimating more vulnerable sub-populations impairs preparedness and relief efforts. PMID:20047020
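A minimal sketch contrasting conventional areal weighting with a crude dasymetric refinement (spreading population only over residential land) is shown below; the numbers and inputs are hypothetical and the sketch does not reproduce the CEDS method itself.

```python
def areal_weighting(tract_pop, tract_area, overlap_area):
    """Population assigned to a hazard zone in proportion to overlapping area."""
    return tract_pop * overlap_area / tract_area

def dasymetric(tract_pop, residential_area, residential_overlap):
    """Same idea, but population is spread only over residential land."""
    return tract_pop * residential_overlap / residential_area

# Hypothetical tract: 5000 people, 2.0 km^2 total, 0.5 km^2 in the flood zone,
# but only 0.8 km^2 is residential and 0.1 km^2 of that floods.
print(areal_weighting(5000, 2.0, 0.5))   # 1250.0 people at risk
print(dasymetric(5000, 0.8, 0.1))        # 625.0 people at risk
```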
Memantine inhibits β-amyloid aggregation and disassembles preformed β-amyloid aggregates.
Takahashi-Ito, Kaori; Makino, Mitsuhiro; Okado, Keiko; Tomita, Taisuke
2017-11-04
Memantine, an uncompetitive glutamatergic N-methyl-d-aspartate (NMDA) receptor antagonist, is widely used as a medication for the treatment of Alzheimer's disease (AD). We previously reported that chronic treatment of AD with memantine reduces the amount of insoluble β-amyloid (Aβ) and soluble Aβ oligomers in animal models of AD. The mechanisms by which memantine reduces Aβ levels in the brain were evaluated by determining the effect of memantine on Aβ aggregation using thioflavin T and transmission electron microscopy. Memantine inhibited the formation of Aβ(1-42) aggregates in a concentration-dependent manner, whereas amantadine, a structurally similar compound, did not affect Aβ aggregation at the same concentrations. Furthermore, memantine inhibited the formation of different types of Aβ aggregates, including Aβs carrying familial AD mutations, and disaggregated preformed Aβ(1-42) fibrils. These results suggest that the inhibition of Aβ aggregation and induction of Aβ disaggregation may be involved in the mechanisms by which memantine reduces Aβ deposition in the brain. Copyright © 2017 Elsevier Inc. All rights reserved.
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
Integrated approach for automatic target recognition using a network of collaborative sensors.
Mahalanobis, Abhijit; Van Nevel, Alan
2006-10-01
We introduce what is believed to be a novel concept by which several sensors with automatic target recognition (ATR) capability collaborate to recognize objects. Such an approach would be suitable for netted systems in which the sensors and platforms can coordinate to optimize end-to-end performance. We use correlation filtering techniques to facilitate the development of the concept, although other ATR algorithms may be easily substituted. Essentially, a self-configuring geometry of netted platforms is proposed that positions the sensors optimally with respect to each other, and takes into account the interactions among the sensor, the recognition algorithms, and the classes of the objects to be recognized. We show how such a paradigm optimizes overall performance, and illustrate the collaborative ATR scheme for recognizing targets in synthetic aperture radar imagery by using viewing position as a sensor parameter.
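A sketch of the correlation-filtering idea in its simplest form, FFT-based cross-correlation of a target template against a scene, is given below; the composite ATR correlation filters used in the paper are not reproduced, and the scene and template here are synthetic.

```python
import numpy as np

def correlate(scene, template):
    """Circular cross-correlation of a template with a scene via the FFT; returns
    the (row, col) of the peak response, i.e. the best match of the template's
    top-left corner."""
    pad = np.zeros_like(scene, dtype=float)
    pad[:template.shape[0], :template.shape[1]] = template
    corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))))
    return np.unravel_index(np.argmax(corr), corr.shape)

scene = np.zeros((64, 64))
scene[20:28, 30:38] = 1.0                 # hypothetical 8x8 bright target
template = np.ones((8, 8))
print(correlate(scene, template))         # (20, 30)
```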
Network and data security design for telemedicine applications.
Makris, L; Argiriou, N; Strintzis, M G
1997-01-01
The maturing of telecommunication technologies has ushered in a whole new era of applications and services in the health care environment. Teleworking, teleconsultation, multimedia conferencing and medical data distribution are rapidly becoming commonplace in clinical practice. As a result, a set of problems arises, concerning data confidentiality and integrity. Public computer networks, such as the emerging ISDN technology, are vulnerable to eavesdropping. Therefore, it is important for telemedicine applications to employ end-to-end encryption mechanisms securing the data channel from unauthorized access or modification. We propose a network access and encryption system that is both economical and easily implemented for integration in developing or existing applications, using well-known and thoroughly tested encryption algorithms. Public-key cryptography is used for session-key exchange, while symmetric algorithms are used for bulk encryption. Mechanisms for session-key generation and exchange are also provided.
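A minimal sketch of the hybrid scheme described (public-key exchange of a symmetric session key, symmetric bulk encryption), using the Python cryptography package for illustration rather than the specific algorithms of the paper:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Receiver's long-term key pair (the public key would be distributed beforehand).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh session key, wrap it with the receiver's public key,
# and use it for bulk encryption of the medical data.
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)
ciphertext = Fernet(session_key).encrypt(b"patient record ...")

# Receiver: unwrap the session key and decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"patient record ..."
```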
2012-01-01
Background Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660
Using food intake records to estimate compliance with the Eatwell Plate dietary guidelines.
Whybrow, S; Macdiarmid, J I; Craig, L C A; Clark, H; McNeill, G
2016-04-01
The UK Eatwell Plate is consumer based advice recommending the proportions of five food groups for a balanced diet: starchy foods, fruit and vegetables, dairy foods, nondairy sources of protein and foods and drinks high in fat or sugar. Many foods comprise ingredients from several food groups and consumers need to consider how these fit with the proportions of the Eatwell Plate. This involves disaggregating composite dishes into proportions of individual food components. The present study aimed to match the diets of adults in Scotland to the Eatwell Plate dietary recommendations and to describe the assumptions and methodological issues associated with estimating Eatwell Plate proportions from dietary records. Foods from weighed intake records of 161 females and 151 males were assigned to a single Eatwell group based on the main ingredient for composite foods, and the overall Eatwell Plate proportions of each subject's diet were calculated. Food group proportions were then recalculated after disaggregating composite foods. The fruit and vegetables and starchy food groups consumed were significantly lower than recommended in the Eatwell Plate, whereas the proportions of the protein and foods high in fat or sugar were significantly higher. Failing to disaggregate composite foods gave an inaccurate estimate of the food group composition of the diet. Estimating Eatwell Plate proportions from dietary records is not straightforward, and is reliant on methodological assumptions. These need to be standardised and disseminated to ensure consistent analysis. © 2015 The British Dietetic Association Ltd.
Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk
Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; ...
2016-12-30
Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however it is not specific as QTc prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-T peakc (J-T peakc) and T peak-T end intervals can identify QTc prolonging drugs with inward current block and is being proposed as a part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-T peakc and T peak-T end intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profile of the baseline and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of J-T peakc and T peak-T end intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.
Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johannesen, Lars; Vicente, Jose; Hosseini, Meisam
Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however it is not specific as QTc prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-T peakc (J-T peakc) and T peak-T end intervals can identify QTc prolonging drugs with inward current block and is being proposed as a part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-T peakc and T peak-T end intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profile of the baseline and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of J-T peakc and T peak-T end intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.
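For context, heart-rate correction of an ECG interval is typically of the form interval/RR^k; the sketch below uses the common Bazett (k = 1/2) and Fridericia (k = 1/3) exponents, since the abstract does not state which correction the algorithm applies, and the interval values are hypothetical.

```python
def corrected_interval(interval_ms, rr_ms, exponent):
    """Heart-rate correction of an ECG interval: interval / (RR in seconds)**exponent."""
    return interval_ms / (rr_ms / 1000.0) ** exponent

qt, rr = 400.0, 800.0                      # hypothetical QT and RR intervals in ms
print(corrected_interval(qt, rr, 0.5))     # Bazett:     ~447 ms
print(corrected_interval(qt, rr, 1 / 3))   # Fridericia: ~431 ms
```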
Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M
2013-11-01
Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.
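One of the checks described above, positivity of the Jacobian determinant of the deformation, can be sketched as follows for a 2-D displacement field (3-D is analogous); `dvf` is assumed to hold displacements in voxel units with shape (2, H, W).

```python
import numpy as np

def jacobian_determinant(dvf):
    """det(I + grad(u)) at every voxel of a 2-D displacement field u of shape (2, H, W).
    Values <= 0 indicate folding of the deformation."""
    uy, ux = dvf
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# Hypothetical smooth field: a pure translation has det = 1 everywhere.
dvf = np.ones((2, 32, 32)) * 2.0
print(np.allclose(jacobian_determinant(dvf), 1.0))   # True
```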
2012-09-01
interpreting the state vector as the health indicator and a threshold is used on this variable in order to compute EOL (end-of-life) and RUL. Here, we... End-of-life (EOL) would match the true spread and would not change from one experiment to another. This is, however, in practice impossible to achieve
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
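A sketch of the SAD search along a rectified epipolar line, of the kind used for target tracking here, is given below; `left` and `right` are assumed to be rectified grayscale images and the patch location in the left image is assumed known.

```python
import numpy as np

def sad_match(left, right, row, col, patch=7, max_disp=64):
    """Find the horizontal disparity of the patch centred at (row, col) in the left
    image by minimising the sum of absolute differences along the same scanline
    of the right image."""
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_sad = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        sad = np.abs(ref - cand).sum()
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d   # depth then follows from stereo triangulation: Z = f*B/d

left = np.zeros((50, 200)); left[20:27, 100:107] = 255.0   # synthetic bright target
right = np.roll(left, -12, axis=1)                         # hypothetical 12-pixel disparity
print(sad_match(left, right, row=23, col=103))             # 12
```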
Providing end-to-end QoS for multimedia applications in 3G wireless networks
NASA Astrophysics Data System (ADS)
Guo, Katherine; Rangarajan, Samapth; Siddiqui, M. A.; Paul, Sanjoy
2003-11-01
As the usage of wireless packet data services increases, wireless carriers today are faced with the challenge of offering multimedia applications with QoS requirements within current 3G data networks. End-to-end QoS requires support at the application, network, link and medium access control (MAC) layers. We discuss existing CDMA2000 network architecture and show its shortcomings that prevent supporting multiple classes of traffic at the Radio Access Network (RAN). We then propose changes in RAN within the standards framework that enable support for multiple traffic classes. In addition, we discuss how Session Initiation Protocol (SIP) can be augmented with QoS signaling for supporting end-to-end QoS. We also review state of the art scheduling algorithms at the base station and provide possible extensions to these algorithms to support different classes of traffic as well as different classes of users.
Soil moisture retrieval at regional scale from AMSR2 data (Conference Presentation)
NASA Astrophysics Data System (ADS)
Paloscia, Simonetta; Santi, Emanuele; Pettinato, Simone; Brocca, Luca; Ciabatta, Luca
2016-10-01
The aim of this work is to exploit the potential of AMSR2 for hydrological applications on a regional scale and in heterogeneous environments characterised by different surface covers at subpixel resolution. The soil moisture content (SMC) estimated from Advanced Microwave Scanning Radiometer 2 (AMSR2) through the ANN-based "HydroAlgo" algorithm is first compared with the outputs of the Soil Water Balance hydrological model (SWBM). The comparison is performed over Italy, by considering all the available overpasses of AMSR2 since July 2012. The SMC generated by HydroAlgo is then considered as input for generating a rainfall product through the SM2RAIN algorithm. The comparison between observed and estimated rainfall in central Italy provided satisfactory results with substantial room for improvement. In this work, the ANN "HydroAlgo" algorithm [1], which was originally developed for AMSR-E, was adapted and re-trained for AMSR2, accounting for the two C-band channels provided by this new sensor. The disaggregation technique implemented in HydroAlgo [2], devoted to the improvement of ground resolution, made this algorithm particularly suitable for application to such a heterogeneous environment. The algorithm allows obtaining an SMC product with enhanced spatial resolution (0.1°), which is more suitable for hydrological applications. The AMSR2-derived SMC is compared with simulated data obtained from the application of a well-established soil water balance model [3]. The training and test of the algorithm are carried out on a test area in central Italy, while the whole of Italy is considered for the validation. The last step of the activity is the use of the HydroAlgo SMC as input to the SM2RAIN algorithm [4], in order to exploit the potential contribution of this product at enhanced resolution for rainfall estimation. [1] E. Santi, S. Pettinato, S. Paloscia, P. Pampaloni, G. Macelloni, and M. Brogioni (2012), "An algorithm for generating soil moisture and snow depth maps from microwave spaceborne radiometers: HydroAlgo", Hydrology and Earth System Sciences, 16, pp. 3659-3676, doi:10.5194/hess-16-3659-2012. [2] E. Santi (2010), "An application of SFIM technique to enhance the spatial resolution of microwave radiometers", Intern. J. Remote Sens., vol. 31, 9, pp. 2419-2428. [3] L. Brocca, S. Camici, F. Melone, T. Moramarco, J. Martinez-Fernandez, J.-F. Didon-Lescot, R. Morbidelli (2014), "Improving the representation of soil moisture by using a semi-analytical infiltration model", Hydrological Processes, 28(4), pp. 2103-2115, doi:10.1002/hyp.9766. [4] Brocca, L., Ciabatta, L., Massari, C., Moramarco, T., Hahn, S., Hasenauer, S., Kidd, R., Dorigo, W., Wagner, W., Levizzani, V. (2014). Soil as a natural rain gauge: estimating global rainfall from satellite soil moisture data. Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489.
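A simplified sketch of the soil-water-balance inversion underlying SM2RAIN as described by Brocca et al. (2014) is shown below: rainfall is estimated from the increase in relative saturation plus a drainage term, p ≈ Z·ds/dt + a·s^b. The parameter values and the omission of evapotranspiration are assumptions of this sketch, not values from the cited work.

```python
import numpy as np

def sm2rain_like(sat, dt_hours, Z=80.0, a=2.0, b=5.0):
    """Estimate rainfall (mm per step) from a relative-saturation series sat in [0, 1].
    Simplified inversion: p = Z*ds + a*s**b*dt, clipped at zero (no negative rain)."""
    sat = np.asarray(sat, dtype=float)
    ds = np.diff(sat)
    s_mid = 0.5 * (sat[1:] + sat[:-1])
    p = Z * ds + a * s_mid**b * dt_hours
    return np.clip(p, 0.0, None)

# Hypothetical daily saturation values: wetting, then slight dry-down.
print(sm2rain_like([0.30, 0.45, 0.44], dt_hours=24))  # rain mainly in the wetting interval
```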
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for a practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, which is a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order of request processing and in the procedures of routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
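A minimal sketch of a centralized RWA step of the general kind discussed, shortest-path routing with first-fit wavelength assignment under the wavelength-continuity constraint, is given below; the authors' hottest-request-first policy and routing-table-update variants are not reproduced, and the topology and request list are hypothetical.

```python
import networkx as nx

def route_and_assign(g, requests, n_wavelengths=8):
    """Serve (src, dst) requests on graph g: shortest path + first-fit wavelength.
    Each edge carries a 'used' set of wavelengths already occupied on that link."""
    lightpaths = []
    for src, dst in requests:
        path = nx.shortest_path(g, src, dst)
        links = list(zip(path, path[1:]))
        for w in range(n_wavelengths):                     # first-fit scan
            if all(w not in g.edges[u, v]["used"] for u, v in links):
                for u, v in links:
                    g.edges[u, v]["used"].add(w)
                lightpaths.append((path, w))
                break
        else:
            lightpaths.append((path, None))                # blocked request
    return lightpaths

g = nx.cycle_graph(6)
nx.set_edge_attributes(g, {e: set() for e in g.edges}, "used")
print(route_and_assign(g, [(0, 3), (0, 2), (2, 4)]))
```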
A complex magma mixing origin for rocks erupted in 1915, Lassen Peak, California
Clynne, M.A.
1999-01-01
The eruption of Lassen Peak in May 1915 produced four volcanic rock types within 3 days, and in the following order: (1) hybrid black dacite lava containing (2) undercooled andesitic inclusions, (3) compositionally banded pumice with dark andesite and light dacite bands, and (4) unbanded light dacite. All types represent stages of a complex mixing process between basaltic andesite and dacite that was interrupted by the eruption. They contain disequilibrium phenocryst assemblages characterized by the co-existence of magnesian olivine and quartz and by reacted and unreacted phenocrysts derived from the dacite. The petrography and crystal chemistry of the phenocrysts and the variation in rock compositions indicate that basaltic andesite intruded dacite magma and partially hybridized with it. Phenocrysts from the dacite magma were reacted. Cooling, crystallization, and vesiculation of the hybrid andesite magma converted it to a layer of mafic foam. The decreased density of the andesite magma destabilized and disrupted the foam. Blobs of foam rose into and were further cooled by the overlying dacite magma, forming the andesitic inclusions. Disaggregation of andesitic inclusions in the host dacite produced the black dacite and light dacite magmas. Formation of foam was a dynamic process. Removal of foam propagated the foam layer downward into the hybrid andesite magma. Eventually the thermal and compositional contrasts between the hybrid andesite and black dacite magmas were reduced. Then, they mixed directly, forming the dark andesite magma. About 40-50% of the andesitic inclusions were disaggregated into the host dacite to produce the hybrid black dacite. Thus, disaggregation of inclusions into small fragments and individual crystals can be an efficient magma-mixing process. Disaggregation of undercooled inclusions carrying reacted host-magma phenocrysts produces co-existing reacted and unreacted phenocryst populations.
Prynne, C J; Wagemakers, J J M F; Stephen, A M; Wadsworth, M E J
2009-05-01
The aim of the study was to quantify more precisely the meat intake of a cohort of adults in the UK by disaggregating composite meat dishes. Subjects were members of the Medical Research Council National Survey of Health and Development, 1946 birth cohort. Five-day diaries were collected from 2256 men and women in 1989 and 1772 men and women in 1999. From the details provided, composite meat dishes were broken down into their constituent parts and the meat fraction was added to meat portions only. Meat intake was classified as red meat, processed meat and poultry. Meat consumption without disaggregation of meat dishes resulted in a mean overestimation of 50% in men and 33% in women. Red meat consumption fell between 1989 and 1999 from 51.7 to 41.5 g per day in men and 35.7 to 30.1 g per day in women. Poultry consumption rose from 21.6 to 32.2 g per day in men and 18.2 to 29.4 g per day in women. Re-calculating red meat intakes resulted in the percentage of subjects in 1999 consuming more than the recommendation of the World Cancer Research Fund falling from 30 to 12%. Increasing consumption of red and processed meat was associated with increased intakes of energy, fat, haem iron, zinc and vitamin B(12), and lower intake of fibre. Increased sodium intake was associated with increased consumption of processed meat. Disaggregation of meat dishes provided a more precise estimate of meat consumption. The quantity of red or processed meat in the diet was reflected in the nutrient content of the entire diet.
Prynne, Celia J.; Wagemakers, Jessie J.M.F.; Stephen, Alison M.; Wadsworth, Michael E.J.
2009-01-01
Objectives The aim of the study was to quantify more precisely the meat intake of a cohort of adults in the UK by disaggregating composite meat dishes. Subjects/Methods Subjects were members of the MRC National Survey of Health and Development, 1946 birth cohort. Five-day diaries were collected from 2256 men and women in 1989 and 1772 men and women in 1999. From the details provided, composite meat dishes were broken down into their constituent parts and the meat fraction was added to meat portions only. Meat intake was classified as red meat, processed meat and poultry. Results Meat consumption without disaggregation of meat dishes resulted in a mean over-estimation of 50% in men and 33% in women. Red meat consumption fell between 1989 and 1999 from 51.7 to 41.5 g/day in men and 35.7 to 30.1 g/day in women. Poultry consumption rose from 21.6 to 32.2 g/day in men and 18.2 to 29.4 g/day in women. Re-calculating red meat intakes resulted in the percentage of subjects in 1999 consuming more than the recommendation of the World Cancer Research Fund falling from 30% to 12%. Increasing consumption of red and processed meat was associated with increased intakes of energy, fat, haem iron, zinc and vitamin B12 and lower intake of fibre. Increased sodium intake was associated with increased consumption of processed meat. Conclusions Disaggregation of meat dishes provided a more precise estimate of meat consumption. The quantity of red or processed meat in the diet was reflected in the nutrient content of the entire diet. PMID:18285805
Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang
2016-09-07
A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.
Space station image captures a red tide ciliate bloom at high spectral and spatial resolution.
Dierssen, Heidi; McManus, George B; Chlus, Adam; Qiu, Dajun; Gao, Bo-Cai; Lin, Senjie
2015-12-01
Mesodinium rubrum is a globally distributed nontoxic ciliate that is known to produce intense red-colored blooms using enslaved chloroplasts from its algal prey. Although frequent enough to have been observed by Darwin, blooms of M. rubrum are notoriously difficult to quantify because M. rubrum can aggregate into massive clouds of rusty-red water in a very short time due to its high growth rates and rapid swimming behavior and can disaggregate just as quickly by vertical or horizontal dispersion. A September 2012 hyperspectral image from the Hyperspectral Imager for the Coastal Ocean sensor aboard the International Space Station captured a dense red tide of M. rubrum (10(6) cells per liter) in surface waters of western Long Island Sound. Genetic data confirmed the identity of the chloroplast as a cryptophyte that was actively photosynthesizing. Microscopy indicated extremely high abundance of its yellow fluorescing signature pigment phycoerythrin. Spectral absorption and fluorescence features were related to ancillary photosynthetic pigments unique to this organism that cannot be observed with traditional satellites. Cell abundance was estimated at a resolution of 100 m using an algorithm based on the distinctive yellow fluorescence of phycoerythrin. Future development of hyperspectral satellites will allow for better enumeration of bloom-forming coastal plankton, the associated physical mechanisms, and contributions to marine productivity.
Space station image captures a red tide ciliate bloom at high spectral and spatial resolution
Dierssen, Heidi; McManus, George B.; Chlus, Adam; Qiu, Dajun; Gao, Bo-Cai; Lin, Senjie
2015-01-01
Mesodinium rubrum is a globally distributed nontoxic ciliate that is known to produce intense red-colored blooms using enslaved chloroplasts from its algal prey. Although frequent enough to have been observed by Darwin, blooms of M. rubrum are notoriously difficult to quantify because M. rubrum can aggregate into massive clouds of rusty-red water in a very short time due to its high growth rates and rapid swimming behavior and can disaggregate just as quickly by vertical or horizontal dispersion. A September 2012 hyperspectral image from the Hyperspectral Imager for the Coastal Ocean sensor aboard the International Space Station captured a dense red tide of M. rubrum (10(6) cells per liter) in surface waters of western Long Island Sound. Genetic data confirmed the identity of the chloroplast as a cryptophyte that was actively photosynthesizing. Microscopy indicated extremely high abundance of its yellow fluorescing signature pigment phycoerythrin. Spectral absorption and fluorescence features were related to ancillary photosynthetic pigments unique to this organism that cannot be observed with traditional satellites. Cell abundance was estimated at a resolution of 100 m using an algorithm based on the distinctive yellow fluorescence of phycoerythrin. Future development of hyperspectral satellites will allow for better enumeration of bloom-forming coastal plankton, the associated physical mechanisms, and contributions to marine productivity. PMID:26627232
Parallel Vision Algorithm Design and Implementation 1988 End of Year Report
1989-08-01
as a local operation, the provided C code used raster order processing to speed up execution time. This made it impossible to implement the code using... Apply, which does not allow the programmer to take advantage of raster order processing. Therefore, the 5x5 median filter algorithm was a straight... possible to exploit raster-order processing in W2, giving greater efficiency. The first advantage is the reason that connected components and the Hough
Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers
2016-07-01
to go somewhere but you did not say where”), (Kennedy et al. 2007; Perzanowski et al 2000a, 2000b). Many efforts are currently focused on developing... start/end of a gesture. They reported a 98% accuracy using a modified handwriting recognition statistical algorithm. The same algorithm was tested... to the device (light switch, music player) and saying “lights on” or “volume up” (Wilson and Shafer 2003). The Nintendo Wii remote controller has
Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo
2015-03-01
Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG were recorded within a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 shorter than one minute, were annotated in 1976 h-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without employing trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at FDR 2.0 (median (range); p < 0.001, Wilcoxon signed rank test). Using trend templates might therefore aid in detection of short seizures by EEG monitoring at the NICU.
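A compact sketch of the general approach, per-epoch EEG features augmented with trend information and fed to a support vector machine, is shown below; the synthetic "line length" feature and the simple rolling-difference trend are placeholders, not the study's feature set or trend templates.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def add_trend(features, window=5):
    """Append a simple trend feature: difference between the current value and the
    mean of the previous `window` epochs (a crude stand-in for onset/end templates)."""
    trend = np.zeros_like(features)
    for i in range(len(features)):
        start = max(0, i - window)
        trend[i] = features[i] - features[start:i].mean() if i > start else 0.0
    return np.column_stack([features, trend])

# Synthetic per-epoch feature: higher and rising during the simulated seizure.
labels = np.r_[np.zeros(200), np.ones(30), np.zeros(200)]
feature = rng.normal(1.0, 0.2, labels.size) + 2.0 * labels
X = add_trend(feature)
clf = SVC(kernel="rbf").fit(X, labels)
print((clf.predict(X) == labels).mean())   # training accuracy of the toy model
```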
[Algorithms of artificial neural networks--practical application in medical science].
Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna
2005-12-01
Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, in spite of many computer applications of various ANN algorithms ready for use, artificial intelligence is relatively rarely applied to data processing. This paper presents practical aspects of scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN were discussed starting from material selection and dividing it into groups, to the quality assessment of obtained results at the end. The most frequent, typical reasons for errors as well as the comparison of ANN method to the modeling by regression analysis were also described.
NASA Astrophysics Data System (ADS)
Ozdogan, M.; Serrat-Capdevila, A.; Anderson, M. C.
2017-12-01
Despite increasing scarcity of freshwater resources, there is dearth of spatially explicit information on irrigation water consumption through evapotranspiration, particularly in semi-arid and arid geographies. Remote sensing, either alone or in combination with ground surveys, is increasingly being used for irrigation water management by quantifying evaporative losses at the farm level. Increased availability of observations, sophisticated algorithms, and access to cloud-based computing is also helping this effort. This presentation will focus on crop-specific evapotranspiration estimates at the farm level derived from remote sensing in a number of water-scarce regions of the world. The work is part of a larger effort to quantify irrigation water use and improve use efficiencies associated with several World Bank projects. Examples will be drawn from India, where groundwater based irrigation withdrawals are monitored with the help of crop type mapping and evapotranspiration estimates from remote sensing. Another example will be provided from a northern irrigation district in Mexico, where remote sensing is used for detailed water accounting at the farm level. These locations exemplify the success stories in irrigation water management with the help of remote sensing with the hope that spatially disaggregated information on evapotranspiration can be used as inputs for various water management decisions as well as for better water allocation strategies in many other water scarce regions.
Quantitative Electron Probe Microanalysis: State of the Art
NASA Technical Reports Server (NTRS)
Carpernter, P. K.
2005-01-01
Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvements in the electron column and X-ray spectrometers have resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(rho-z) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(rho-z) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(rho-z) algorithms, is used to critically evaluate accuracy issues; accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for more complete treatment of the continuum X-ray fluorescence correction.
NASA Astrophysics Data System (ADS)
Fernandez-Ugalde, O.; Barré, P.; Hubert, F.; Virto, I.; Chenu, C.; Ferrage, E.; Caner, L.
2012-12-01
Aggregation is a key process for soil functioning as it influences C storage, vulnerability to erosion and water holding capacity. While the influence of soil organic C on aggregation has been documented, much less is known about the role of soil mineralogy. Soils usually contain a mixture of clay minerals with contrasted surface properties, which should result in different contributions of clay minerals to aggregation. We took advantage of the intrinsic mineral heterogeneity of a temperate Luvisol to compare the role of clay minerals (illite, smectite, kaolinite, and mixed-layer illite-smectite) in aggregation. In a first step, grassland and tilled soil samples were fractionated in water into aggregate-size classes according to the hierarchical model of aggregation (Tisdall and Oades, 1982). Clay mineralogy and organic C in the aggregate-size classes were analyzed. The results showed that interstratified minerals containing swelling phases accumulated in aggregated fractions (>2 μm) compared to free clay fractions (<2 μm) in the two land uses. The accumulation increased from large macro-aggregates (>500 μm) to micro-aggregates (50-250 μm). C concentration and C/N ratio followed the opposite trend. These results constitute clay-mineral-based evidence for the hierarchical model of aggregation, which postulates an increasing importance of the reactivity of clay minerals in the formation of micro-aggregates compared to larger aggregates. In the latter aggregates, formation relies on the physical enmeshment of particles by fungal hyphae, and root and microbial exudates. In a second step, micro-aggregates from the tilled soil samples were submitted to increasingly disaggregating treatments by sonication to evaluate the link between their water stability and clay mineralogy. Micro-aggregates with increasing stability showed an increase in interstratified minerals containing swelling phases and in C concentration for low intensities of disaggregation (from 0 to 5 J mL-1). This suggests that swelling phases promote their stability. Swelling phases and organic C decreased for greater intensities of disaggregation. These results and the SEM images taken at different disaggregation intensities indicate that when the disaggregation intensity is increased above 5 J mL-1, the recovered material consists of sand particles covered by physical coatings of illite and kaolinite. Our results show that different clay minerals make different contributions to soil aggregation. Swelling phases are especially important for the formation of water-stable aggregates, whereas illite and kaolinite can either contribute to aggregation or be coated onto sand grains in "mineral aggregates", without porosity and organic C protection capability. In conclusion, soils with a large proportion of swelling clay minerals have greater potential for carbon storage by occlusion in aggregates and greater resistance to erosion. Tisdall JM, Oades JM (1982) Organic matter and water-stable aggregates in soils. J Soil Sci 62: 141-163.
Changes to Sub-daily Rainfall Patterns in a Future Climate
NASA Astrophysics Data System (ADS)
Westra, S.; Evans, J. P.; Mehrotra, R.; Sharma, A.
2012-12-01
An algorithm is developed for disaggregating daily rainfall into sub-daily rainfall 'fragments' (continuous high temporal-resolution rainfall sequences whose total depth sums to the daily rainfall amount) under a future, warmer climate. The basis of the algorithm is to re-sample sub-daily fragments from the historical record conditional on the total daily rainfall amount and a range of temperature-based atmospheric predictors. The logic is that as the atmosphere warms, future rainfall patterns will be more reflective of historical rainfall patterns which occurred on warmer days at the same location, or at locations which have an atmospheric temperature profile more representative of expected future atmospheric conditions. It was found that the daily to sub-daily scaling relationship varied significantly by season and by location, with rainfall patterns on warmer seasons or at warmer locations typically exhibiting higher rainfall intensity occurring over shorter periods within a day, compared with cooler seasons and locations. Importantly, by regressing against temperature-based atmospheric covariates, this effect was substantially reduced, suggesting that the approach also may be valid when extrapolating to a future climate. An adjusted method of fragments algorithm was then applied to nine stations around Australia, with the results showing that when holding total daily rainfall constant, the maximum intensity of short duration rainfall increased by a median of about 5% per degree for the maximum 6 minute burst, and 3.5% for the maximum one hour burst, whereas the fraction of the day with no rainfall increased by a median of 1.5%. This highlights that a large proportion of the change to the distribution of rainfall is likely to occur at sub-daily timescales, with significant implications for many hydrological systems.
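A minimal sketch of the conditional resampling idea: for each day to be disaggregated, pick the historical day whose (daily total, temperature covariate) pair is closest and rescale its sub-daily fragments to the target daily total. The distance metric, the single covariate, and the example values are simplifications and assumptions, not the adjusted method of fragments used in the study.

```python
import numpy as np

def disaggregate(day_total, day_temp, hist_totals, hist_temps, hist_fragments):
    """Return a sub-daily sequence for one day by a method-of-fragments resample.
    hist_fragments[i] is the observed sub-daily pattern of historical day i,
    normalised so that it sums to 1."""
    # Standardised distance in (daily total, temperature) space.
    d = ((hist_totals - day_total) / hist_totals.std())**2 \
        + ((hist_temps - day_temp) / hist_temps.std())**2
    donor = int(np.argmin(d))
    return day_total * hist_fragments[donor]

# Hypothetical record: 3 historical days with 4 sub-daily steps each.
hist_totals = np.array([10.0, 22.0, 15.0])
hist_temps = np.array([12.0, 25.0, 18.0])
hist_fragments = np.array([[0.10, 0.60, 0.20, 0.10],
                           [0.40, 0.40, 0.10, 0.10],
                           [0.25, 0.25, 0.25, 0.25]])
print(disaggregate(18.0, 24.0, hist_totals, hist_temps, hist_fragments))
# donor is the warm, wet day -> [7.2, 7.2, 1.8, 1.8]
```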
The influence of omniscient technology on cryptography
NASA Astrophysics Data System (ADS)
Huang, Weihong; Li, Jian
2009-07-01
Scholars agree that concurrent algorithms are an interesting new topic in the field of cyberinformatics, and hackers worldwide concur. In fact, few end-users would disagree with the evaluation of architecture. We propose a Bayesian tool for harnessing massive multiplayer online role-playing games (FIRER), which we use to prove that the well-known ubiquitous algorithm for the improvement of wide-area networks by Karthik Lakshminarayanan is in Co-NP.
Real-time handling of existing content sources on a multi-layer display
NASA Astrophysics Data System (ADS)
Singh, Darryl S. K.; Shin, Jung
2013-03-01
A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where the depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and having full panel resolution for 3D effects, with no side effects of nausea or eye-strain. However, content typically must be designed for its optical configuration as foreground and background image pairs. A process was designed to give a consistent 3D effect on a 2-layer MLD from existing stereo video content in real-time. Optimizations to stereo matching algorithms that generate depth maps in real-time were specifically tailored for the optical characteristics and image processing algorithms of an MLD. The end-to-end process included improvements to the Hierarchical Belief Propagation (HBP) stereo matching algorithm and to optical flow and temporal consistency. Imaging algorithms designed for the optical characteristics of an MLD provided some visual compensation for depth map inaccuracies. The result can be demonstrated in a PC environment, displayed on a 22" MLD, used in the casino slot market, with 8 mm of panel separation. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on an MLD in real-time.
MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali
2017-01-01
Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO and so on. In this work, two novel hybrid meta-heuristic algorithms, Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on Genetic Algorithm and Learning Automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of Genetic Algorithm and Learning Automata sequentially, while the MGALA algorithm uses them simultaneously. We applied the proposed algorithms to select the minimum possible number of features from three different datasets and observed that the MGALA and SGALA algorithms had the best outcome, both individually and on average, compared with other feature selection algorithms. Through comparison of the proposed algorithms, we deduced that the rate of convergence to the optimal result of the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO and LA algorithms. Finally, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were used as inputs to an LS-SVR model, and the results showed that the LS-SVR model had greater predictive ability with the input from the SGALA and MGALA algorithms than with the input from all other mentioned algorithms. Therefore, the results corroborate that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to all other mentioned algorithms. PMID:28979308
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
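A minimal sketch of the variance-disaggregation and Bayes steps described above, assuming Gaussian measurement uncertainties and a truncated-normal population prior purely for illustration (this is not the authors' exact formulation):

```python
import numpy as np

def population_pdf(results, sigmas, grid):
    """Separate population variability from measurement uncertainty (sketch).

    results : net measurement results (may be negative)
    sigmas  : traditional 1-sigma measurement uncertainties
    grid    : non-negative grid of candidate true values (measurands)
    Returns a discrete prior PDF over `grid` plus one Bayesian posterior per
    individual, under an unbiased-measurement, Gaussian-uncertainty assumption.
    """
    results, sigmas = np.asarray(results, float), np.asarray(sigmas, float)
    # Population variance = observed variance minus mean measurement variance.
    pop_var = max(np.var(results) - np.mean(sigmas ** 2), 1e-12)
    pop_mean = results.mean()
    # Non-negative (truncated normal) prior with the inferred moments.
    prior = np.exp(-0.5 * (grid - pop_mean) ** 2 / pop_var)
    prior /= prior.sum()
    # Bayes: posterior_i(t) proportional to prior(t) * Normal(result_i | t, sigma_i).
    like = np.exp(-0.5 * ((results[:, None] - grid[None, :]) / sigmas[:, None]) ** 2)
    post = prior[None, :] * like
    post /= post.sum(axis=1, keepdims=True)
    return prior, post

grid = np.linspace(0.0, 10.0, 501)
prior, post = population_pdf([-0.4, 0.2, 1.1, 2.5], [0.8, 0.8, 0.9, 1.0], grid)
print((post * grid).sum(axis=1))  # non-negative posterior mean per individual
```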
Methodology for Estimating Ton-Miles of Goods Movements for the U.S. Freight Multimodal Network System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
2013-01-01
Ton-miles is a commonly used measure of freight transportation output. Estimation of ton-miles in the U.S. transportation system requires freight flow data at a disaggregated level (either link flows, path flows or origin-destination flows between small geographic areas). However, the sheer magnitude of the freight data system, as well as industrial confidentiality concerns in Census surveys, limit the freight data that are made available to the public. Through the years, the Center for Transportation Analysis (CTA) of the Oak Ridge National Laboratory (ORNL) has been working on the development of comprehensive national and regional freight databases and network flow models. One of the main products of this effort is the Freight Analysis Framework (FAF), a public database released by the ORNL. FAF provides to the general public a multidimensional matrix of freight flows (weight and dollar value) on the U.S. transportation system between states, major metropolitan areas, and remainder of states. Recently, the CTA research team has developed a methodology to estimate ton-miles by mode of transportation between the 2007 FAF regions. This paper describes the data disaggregation methodology. The method relies on the estimation of disaggregation factors that are related to measures of production, attractiveness and average shipment distances by mode of service. Production and attractiveness of counties are captured by the total employment payroll. Likely mileages for shipments between counties are calculated by using a geographic database, i.e. the CTA multimodal network system. Results of validation experiments demonstrate the validity of the method. Moreover, the 2007 FAF ton-miles estimates are consistent with the major freight data programs for rail and water movements.
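A toy, gravity-style illustration of the disaggregation factors described above (payroll as a proxy for production and attraction, network miles for shipment distance); it is a sketch of the general idea only, not the CTA/ORNL production code, and all figures are made up:

```python
import numpy as np

def disaggregate_ton_miles(region_tons, payroll_o, payroll_d, distance):
    """Split a FAF region-to-region tonnage across county pairs.

    region_tons : total tons moved between an origin and a destination region
    payroll_o   : employment payroll of each origin county (production proxy)
    payroll_d   : payroll of each destination county (attraction proxy)
    distance    : matrix of likely network miles between county pairs
    Returns county-pair tons and total ton-miles.
    """
    payroll_o = np.asarray(payroll_o, float)
    payroll_d = np.asarray(payroll_d, float)
    weights = np.outer(payroll_o, payroll_d)
    shares = weights / weights.sum()          # disaggregation factors
    tons = region_tons * shares               # county-pair flows
    ton_miles = float((tons * np.asarray(distance, float)).sum())
    return tons, ton_miles

tons, tm = disaggregate_ton_miles(
    region_tons=1_000.0,
    payroll_o=[40.0, 60.0],                   # two origin counties
    payroll_d=[25.0, 75.0],                   # two destination counties
    distance=[[120.0, 150.0], [90.0, 130.0]])
print(round(tm, 1), "ton-miles")
```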
NASA Astrophysics Data System (ADS)
Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.
2017-12-01
The general lack of knowledge about current rates of water abstraction/use is a challenge to sustainable water resources management in many countries, including Uganda. Estimates of water abstraction/use rates over Uganda, currently available from the FAO, are not disaggregated according to source, making it difficult to understand how much is taken out of individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates according to source (i.e. groundwater and surface water). However, over Sub-Saharan African countries, these model use estimates are highly uncertain given the scale limitations in applying water use data (i.e. point versus regional), which influence model calibration/validation. In this study, we utilize data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate areas served by each water source. These areas are combined with past population distributions and the average daily water need per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use, and isolate regions prone to groundwater depletion where improved management is required to sustainably manage groundwater use.
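The core accounting step described above (population served by each source times a per-capita daily water need) reduces to a simple calculation; the sketch below uses hypothetical column names and figures:

```python
import pandas as pd

# Illustrative annual groundwater abstraction estimate from point-source data:
# demarcated service area population x assumed per-capita daily need.
sources = pd.DataFrame({
    "source_type": ["borehole", "shallow_well", "borehole"],
    "population_served": [450, 180, 900],        # from GIS-demarcated service areas
    "litres_per_person_day": [20.0, 20.0, 20.0], # assumed basic water need
})

sources["abstraction_m3_per_year"] = (
    sources["population_served"]
    * sources["litres_per_person_day"] * 365 / 1000.0
)

print(sources.groupby("source_type")["abstraction_m3_per_year"].sum())
```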
SHARPEN-systematic hierarchical algorithms for rotamers and proteins on an extended network.
Loksha, Ilya V; Maiolo, James R; Hong, Cheng W; Ng, Albert; Snow, Christopher D
2009-04-30
Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. (c) 2009 Wiley Periodicals, Inc.
USDA-ARS?s Scientific Manuscript database
A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. Performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...
NASA Astrophysics Data System (ADS)
Miletto, Michela; Greco, Francesca; Belfiore, Elena
2017-04-01
Global climate change is expected to exacerbate current and future stresses on water resources from population growth and land use, and to increase the frequency and severity of droughts and floods. Women are more vulnerable to the effects of climate change than men, not only because they constitute the majority of the world's poor but also because they are more dependent for their livelihoods on natural resources that are threatened by climate change. In addition, social, economic and political barriers often limit their coping capacity. Women play a key role in the provision, management and safeguarding of water; nonetheless, gender inequality in water management frameworks persists around the globe. Reliable data are essential to inform decisions and support effective policies. Disaggregating water data by sex is crucial to analyse gendered roles in the water realm and to inform gender-sensitive water policies in light of the global commitments to gender equality of Agenda 2030. In view of this scenario, WWAP has created an innovative toolkit for sex-disaggregated water data collection, the result of participatory work by more than 35 experts in the WWAP Working Group on Sex-Disaggregated Indicators (http://www.unesco.org/new/en/natural-sciences/environment/water/wwap/water-and-gender/un-wwap-working-group-on-gender-disaggregated-indicators/#c1430774). The WWAP toolkit contains four tools: the methodology (Seager J., WWAP UNESCO, 2015), a set of key indicators, the guideline (Pangare V., WWAP UNESCO, 2015) and a questionnaire for field surveys. The WWAP key gender-sensitive indicators address water resources management, aspects of water quality and agricultural uses, and water resources governance and management, and investigate unaccounted labour according to gender and age. Managing water resources is key for climate adaptation. Women are particularly sensitive to water quality and the health of water-dependent ecosystems, often a source of food and job opportunities. Extreme climatic events such as floods and droughts could severely impact the status of water resources and dependent ecosystems and the sustainability of household activities and local economies, given the absence of gender-sensitive preparedness for hydrological and meteorological extremes. This paper describes the application of the WWAP Gender Toolkit to water data assessments in the semi-arid region of the Stampriet transboundary aquifer shared by Botswana, Namibia and South Africa, in the framework of the "Groundwater Resources Governance in Transboundary Aquifers" (GGRETA) project, led and executed by the UNESCO International Hydrological Programme (IHP) and financed by the Swiss Agency for Development and Cooperation (SDC). The field tests proved the reliability of the WWAP gender toolkit and the selected gender-sensitive indicators in freshwater assessment. Further analysis could inform on the gaps and needs for climate adaptation practices. Field data identified socially determined differences in roles, and confirmed the prevalent role of women in managing freshwater for drinking and sanitation purposes within the household, while decision-making for water allocation and use (with implications for hydrological risk) for agriculture and livestock purposes is broadly under men's responsibility.
The emission abatement policy paradox in Australia: evidence from energy-emission nexus.
Ahmed, Khalid; Ozturk, Ilhan
2016-09-01
This paper attempts to investigate the emissions embodied in Australia's economic growth and disaggregated primary energy sources used for electricity production. Using time series data over the period 1990-2012, the ARDL bounds test approach to cointegration is applied to test the long-run association among the underlying variables. The regression results validate the long-run equilibrium relationship among all vectors and confirm that CO2 emissions, economic growth, and disaggregated primary energy consumption impact each other along the long-run path. Afterwards, the long- and short-run analyses are conducted using an error correction model. The results show that economic growth and the coal, oil, gas, and hydro energy sources have a positive and statistically significant impact on CO2 emissions in both the long and short run, with the exception of renewables, which have a negative impact only in the long run. The results conclude that Australia faces a wide gap between its emission abatement policies and targets. The country still relies on emission-intensive fossil fuels (i.e., coal and oil) to meet indigenous electricity demand.
Ma, Mengmeng; Gao, Nan; Sun, Yuhuan; Du, Xiubo; Ren, Jinsong; Qu, Xiaogang
2018-06-19
Adjustable structure, excellent physiochemical properties, and good biocompatibility render polyoxometalates (POMs) suitable drug agents for the treatment of Alzheimer's disease (AD). However, previous work using POMs against AD has focused only on the inhibition of amyloid-β (Aβ) monomer aggregation. Considering that both Aβ fibrils and reactive oxygen species (ROS) are closely associated with the clinical development of AD symptoms, it would be more effective if POMs could also disaggregate Aβ fibrils and eliminate ROS. Herein, a redox-activated near-infrared (NIR) responsive POMs-based nanoplatform (rPOMs@MSNs@copolymer) is developed with high photothermal effect and antioxidant activity. The rPOMs@MSNs@copolymer can generate local hyperthermia to disaggregate Aβ fibrils under NIR laser irradiation because the rPOMs have strong NIR absorption. Furthermore, Aβ-induced ROS can be scavenged by the antioxidant activity of the rPOMs. To the authors' knowledge, there is no previous report of using rPOMs for NIR photothermal treatment of AD. This work may promote the development of multifunctional inorganic agents for biomedical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rotation invariant fast features for large-scale recognition
NASA Astrophysics Data System (ADS)
Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd
2012-10-01
We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), as well as nonparametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend both parametric and nonparametric models advantageously to model the streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy of prediction of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and the matched block bootstrap (MABB)) based on explicit objective functions that minimize the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space, involving simultaneous exploration of the parametric (PAR(1)) and nonparametric (MABB) components. This is achieved using an efficient evolutionary search based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which the parametric and nonparametric components are explored simultaneously, yields a much better prediction of the storage capacity than the MLE-based hybrid models, for which model selection is done in two stages and thus probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
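A minimal sketch of the two storage-based objective functions named in the abstract (relative bias and relative RMSE of reservoir storage capacity), using a sequent-peak storage estimate; the demand level and flow statistics are made up, and the NSGA-II outer loop over the PAR(1)/MABB parameters is not shown:

```python
import numpy as np

def storage_capacity(flows, demand):
    """Sequent-peak estimate of the storage needed to meet a constant demand."""
    deficit, capacity = 0.0, 0.0
    for q in flows:
        deficit = max(deficit + demand - q, 0.0)   # cumulative shortfall
        capacity = max(capacity, deficit)
    return capacity

def storage_objectives(observed, simulated_sets, demand):
    """Relative bias and relative RMSE of storage capacity over an ensemble
    of synthetic flow sequences (the two objectives handed to NSGA-II)."""
    s_obs = storage_capacity(observed, demand)
    s_sim = np.array([storage_capacity(s, demand) for s in simulated_sets])
    rel_bias = (s_sim.mean() - s_obs) / s_obs
    rel_rmse = np.sqrt(np.mean((s_sim - s_obs) ** 2)) / s_obs
    return rel_bias, rel_rmse

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=120)                 # 10 years of monthly flows
sims = [rng.gamma(2.0, 50.0, size=120) for _ in range(25)]
print(storage_objectives(obs, sims, demand=80.0))
```

In the paper's framework these two values would be minimised jointly by the NSGA-II search over the hybrid model's parameters; the snippet only illustrates how the objectives themselves can be computed.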
Myanmar Education: Status, Issues and Challenges.
ERIC Educational Resources Information Center
Tin, Han
2000-01-01
Traces the history and development of the education system of Myanmar. Discusses the need to use disaggregated data in examining Myanmar education because national figures may mask regional differences that are crucial to the planning process. Describes the findings of a national education sector study, and stresses the importance of regional…
Automated Scoring in Context: Rapid Assessment for Placed Students
ERIC Educational Resources Information Center
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal
2013-01-01
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
An Analysis of Costs in Institutions of Higher Education in England
ERIC Educational Resources Information Center
Johnes, Geraint; Johnes, Jill; Thanassoulis, Emmanuel
2008-01-01
Cost functions are estimated, using random effects and stochastic frontier methods, for English higher education institutions. The article advances on existing literature by employing finer disaggregation by subject, institution type and location, and by introducing consideration of quality effects. Estimates are provided of average incremental…
Disaggregated Effects of Device on Score Comparability
ERIC Educational Resources Information Center
Davis, Laurie; Morrison, Kristin; Kong, Xiaojing; McBride, Yuanyuan
2017-01-01
The use of tablets for large-scale testing programs has transitioned from concept to reality for many state testing programs. This study extended previous research on score comparability between tablets and computers with high school students to compare score distributions across devices for reading, math, and science and to evaluate device…
ESEA Reauthorization: Why Data Matter
ERIC Educational Resources Information Center
Data Quality Campaign, 2015
2015-01-01
The reauthorization of the Elementary and Secondary Education Act (ESEA) provides an opportunity to transform how data are used in education. The 2002 ESEA requirement to disaggregate data and provide them to the public has made it possible to have greater transparency and more accurate measures of academic performance than ever. Congress now has…
Essays on Technology and Forecasting in Macroeconomics
ERIC Educational Resources Information Center
Samuels, Jon Devin
2012-01-01
The three chapters in this dissertation use disaggregated models and data to provide new insights on well-established questions in macroeconomics. In the first chapter, to analyze how productivity impacts the business cycle, I model aggregate production with a production possibility frontier that accommodates sector-and factor-biased productivity.…
Manpower Theory and Policy and the Residual Occupational Elasticity of Substitution.
ERIC Educational Resources Information Center
Rostker, Bernard Daniel
By developing the short-run policy implications of a structurally disaggregated labor market, this study attempts to show that fiscal and manpower policies are complementary means to achieve full employment. Using a constant elasticity of substitution production function, the study demonstrates mathematically that the smaller the residual…
Disaggregating the Effects of Marital Trajectories on Health
ERIC Educational Resources Information Center
Dupre, Matthew E.; Meadows, Sarah O.
2007-01-01
Recent studies linking marital status and health increasingly focus on marital trajectories to examine the relationship from a life course perspective. However, research has been slow to bridge the theoretical concept of a marital trajectory with its measurement. This study uses retrospective and prospective data to model the age-dependent effects…
Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.
2012-01-01
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231
Visualization for Hyper-Heuristics. Front-End Graphical User Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroenung, Lauren
Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.
Quality of service routing in wireless ad hoc networks
NASA Astrophysics Data System (ADS)
Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh
2003-08-01
An efficient routing protocol is essential to guarantee application-level quality of service on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path lifetime and the availability of sufficient energy and buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiple paths that it computes between two endpoints. We consider the use of control packets that run at a priority higher than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection, in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice applications.
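A toy illustration of the constrained path selection described above, using networkx to enumerate candidate paths and filtering them by path lifetime, node energy and node buffer space; the attribute names, thresholds and tie-breaking rule are assumptions, not the paper's protocol:

```python
import networkx as nx

def best_qos_path(g, src, dst, min_life, min_energy, min_buffer):
    """Pick a path meeting the constraints named in the abstract: path lifetime,
    node residual energy and node buffer space (attribute names are assumed)."""
    feasible = []
    for path in nx.all_simple_paths(g, src, dst):
        edges = list(zip(path, path[1:]))
        life = min(g.edges[e]["life"] for e in edges)          # path lifetime
        energy = min(g.nodes[n]["energy"] for n in path)
        buff = min(g.nodes[n]["buffer"] for n in path)
        if life >= min_life and energy >= min_energy and buff >= min_buffer:
            feasible.append((life, -len(path), path))          # prefer long life, short path
    return max(feasible)[2] if feasible else None

g = nx.Graph()
g.add_nodes_from([(n, {"energy": e, "buffer": b})
                  for n, e, b in [("s", 9, 5), ("a", 7, 4), ("b", 2, 6), ("d", 8, 5)]])
g.add_edges_from([("s", "a", {"life": 12}), ("a", "d", {"life": 9}),
                  ("s", "b", {"life": 20}), ("b", "d", {"life": 15})])
print(best_qos_path(g, "s", "d", min_life=5, min_energy=3, min_buffer=3))
# ['s', 'a', 'd'] -- the s-b-d path is excluded because node b has low energy
```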
An ultrashort-pulse reconstruction software: GROG, applied to the FLAME laser system
NASA Astrophysics Data System (ADS)
Galletti, Mario
2016-03-01
The GRENOUILLE traces of FLAME probe line pulses (60 mJ, 10 mJ after compression, 70 fs, 1 cm FWHM, 10 Hz) were acquired in the FLAME Front End Area (FFEA) at the Laboratori Nazionali di Frascati (LNF), Istituto Nazionale di Fisica Nucleare (INFN). The complete characterization of the laser pulse parameters was performed using a new algorithm, GRenouille/FrOG (GROG). A characterization with a commercial algorithm, QuickFrog, was also made. The temporal and spectral parameters were in excellent agreement for the two algorithms. In this experimental campaign the probe line of FLAME was completely characterized, and it was shown that GROG, the newly developed algorithm, performs as well as the QuickFrog algorithm with this class of pulses.
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
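A small sketch of the cache-based multilinear look-up table interpolation idea mentioned above; the table axes (visibility, water vapour, reflectance) and values are synthetic placeholders, not SENSOR's actual MODTRAN parameterization:

```python
from functools import lru_cache
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for a MODTRAN-derived look-up table: at-sensor radiance on a
# coarse (visibility, water vapour, reflectance) grid; values are synthetic.
vis = np.array([5.0, 10.0, 23.0, 40.0])
h2o = np.array([0.5, 1.0, 2.0, 4.0])
refl = np.array([0.0, 0.5, 1.0])
table = (0.3 + 0.02 * vis[:, None, None]
         + 0.1 * h2o[None, :, None]
         + 2.0 * refl[None, None, :])

_interp = RegularGridInterpolator((vis, h2o, refl), table)  # multilinear by default

@lru_cache(maxsize=4096)
def radiance(visibility, water_vapour, reflectance):
    """Cached multilinear interpolation: repeated queries with the same
    atmospheric state (common across neighbouring pixels) hit the cache."""
    return _interp((visibility, water_vapour, reflectance)).item()

print(radiance(12.0, 1.5, 0.25))
print(radiance(12.0, 1.5, 0.25))   # second call served from the cache
print(radiance.cache_info())
```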
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved for the CSTR process using about 400 data points. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieves high accuracy for any observable. Following the approach used for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate the systematic error of the algorithm for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs when a Principal Component Analysis is used. We show that the centroiding algorithm error decreases after adopting this method.
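A minimal illustration of LSF-fitting centroiding on a one-dimensional pixel cut; a plain Gaussian is used as a stand-in for the mission's empirically derived line spread functions, and all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_lsf(x, amp, centre, sigma, background):
    return amp * np.exp(-0.5 * ((x - centre) / sigma) ** 2) + background

def centroid_by_lsf_fit(pixels, counts):
    """Estimate a stellar image centroid by fitting a model LSF to the pixel
    samples (Gaussian stand-in; the real pipeline fits measured LSF shapes)."""
    p0 = [counts.max() - counts.min(), pixels[np.argmax(counts)], 1.0, counts.min()]
    popt, _ = curve_fit(gaussian_lsf, pixels, counts, p0=p0)
    return popt[1]   # fitted centre, in pixel units

# Toy frame: a star at pixel 12.3 with photon noise.
rng = np.random.default_rng(2)
x = np.arange(25, dtype=float)
truth = gaussian_lsf(x, 500.0, 12.3, 1.4, 20.0)
obs = rng.poisson(truth).astype(float)
print(centroid_by_lsf_fit(x, obs))   # close to 12.3
```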
muBLASTP: database-indexed protein sequence search on multicore CPUs.
Zhang, Jing; Misra, Sanchit; Wang, Hao; Feng, Wu-Chun
2016-11-04
The Basic Local Alignment Search Tool (BLAST) is a fundamental program in the life sciences that searches databases for sequences that are most similar to a query sequence. Currently, the BLAST algorithm utilizes a query-indexed approach. Although many approaches suggest that sequence search with a database index can achieve much higher throughput (e.g., BLAT, SSAHA, and CAFE), they cannot deliver the same level of sensitivity as the query-indexed BLAST, i.e., NCBI BLAST, or they can only support nucleotide sequence search, e.g., MegaBLAST. Due to the different challenges and characteristics of query indexing and database indexing, the existing techniques for query-indexed search cannot be applied directly to database-indexed search. muBLASTP, a novel database-indexed BLAST for protein sequence search, delivers hits identical to those returned by NCBI BLAST. On Intel Haswell multicore CPUs, for a single query, the single-threaded muBLASTP achieves up to a 4.41-fold speedup for the alignment stages, and up to a 1.75-fold end-to-end speedup over single-threaded NCBI BLAST. For a batch of queries, the multithreaded muBLASTP achieves up to a 5.7-fold speedup for the alignment stages, and up to a 4.56-fold end-to-end speedup over multithreaded NCBI BLAST. With a newly designed index structure for the protein database and associated optimizations in the BLASTP algorithm, we re-factored the BLASTP algorithm for modern multicore processors, achieving much higher throughput with an acceptable memory footprint for the database index.
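To make the database-indexing idea concrete, here is a toy k-mer index over database sequences and a seed-lookup step; it only illustrates the general concept, and muBLASTP's actual index layout, seeding and extension stages are far more involved:

```python
from collections import defaultdict

def build_db_index(sequences, k=3):
    """Toy database index: map each k-mer to (sequence id, offset) pairs."""
    index = defaultdict(list)
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].append((sid, i))
    return index

def seed_hits(query, index, k=3):
    """Return candidate (sequence id, db offset, query offset) seed hits."""
    hits = []
    for j in range(len(query) - k + 1):
        for sid, i in index.get(query[j:j + k], []):
            hits.append((sid, i, j))
    return hits

db = ["MKTAYIAKQR", "GAVLKVLTTG", "MKTGAVLQRA"]
idx = build_db_index(db)
print(seed_hits("MKTAY", idx))   # seeds found in sequences 0 and 2
```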
ERIC Educational Resources Information Center
Burstein, Marcy; Stanger, Catherine; Dumenci, Levent
2012-01-01
The present study: (1) examined relations between parent psychopathology and adolescent internalizing problems, externalizing problems, and substance use in substance-abusing families; and (2) tested family functioning problems as mediators of these relations. Structural equation modeling was used to estimate the independent effects of parent…
Projecting southern timber supply for multiple products by subregion
Robert C. Abt; Frederick W. Cubbage; Karen L. Abt
2009-01-01
While timber supply modeling has been of importance in the wood-producing regions of the United States for decades, it is only more recently that the technology and data have allowed disaggregation of supply and demand to substate regions, including product specific breakdowns and endogenous land use and plantation changes. Using southwide data and an economic supply...
Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A
2001-05-01
The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.
Hybrid protection algorithms based on game theory in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming
2011-12-01
With increasing network size, the optical backbone is divided into multiple domains, and each domain has its own network operator and management policy. At the same time, failures in an optical network may lead to a huge data loss, since each wavelength carries a large amount of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivable algorithms achieve only unilateral optimization of the profit of either users or network operators; they cannot find the double-win optimal solution that considers economic factors for both users and network operators. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, Intra-domain Sub-path Protection (ISP) and Inter-domain End-to-end Protection (IEP). In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve the survivability of intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Cluster Based Location-Aided Routing Protocol for Large Scale Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Wang, Yi; Dong, Liang; Liang, Taotao; Yang, Xinyu; Zhang, Deyun
Routing algorithms with low overhead, stable links and independence from the total number of nodes in the network are essential for the design and operation of large-scale wireless mobile ad hoc networks (MANETs). In this paper, we develop and analyze the Cluster Based Location-Aided Routing Protocol for MANET (C-LAR), a scalable and effective routing algorithm for MANETs. C-LAR runs on top of an adaptive cluster cover of the MANET, which can be created and maintained using, for instance, a weight-based distributed algorithm. This algorithm takes into consideration the node degree, mobility, relative distance, battery power and link stability of mobile nodes. The hierarchical structure stabilizes the end-to-end communication paths and improves the network's scalability, so that the routing overhead does not become excessive in a large-scale MANET. The clusterheads form a connected virtual backbone in the network, determine the network's topology and stability, and provide an efficient approach to minimizing the flooding traffic during route discovery and speeding up this process as well. Furthermore, it is important to investigate how to control the total number of nodes participating in a route establishment process so as to improve the network-layer performance of a MANET. C-LAR uses geographical location information provided by the Global Positioning System to assist routing. The location information of the destination node is used to predict a smaller rectangular, isosceles-triangular, or circular request zone, selected according to the relative locations of the source and the destination, that covers the estimated region in which the destination may be located. Thus, instead of searching for the route in the entire network blindly, C-LAR confines the route-searching space to a much smaller estimated range. Simulation results have shown that C-LAR outperforms other protocols significantly in route set-up time, routing overhead, mean delay and packet collisions, and simultaneously maintains low average end-to-end delay, high delivery success ratio, low control overhead, as well as low route discovery frequency.
Noise estimation for hyperspectral imagery using spectral unmixing and synthesis
NASA Astrophysics Data System (ADS)
Demirkesen, C.; Leloglu, Ugur M.
2014-10-01
Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise; a comprehensive comparative study on the subject was carried out by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation over assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized by a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising algorithm is based on estimation of the end-members and their fractional abundances using a non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. In fact, image de-noising and noise estimation are two sides of the same coin: once we de-noise an image, we can estimate the noise by calculating the difference between the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best-performing method for noise estimation was the spectral and spatial de-correlation (SSDC) method originally proposed in [3], the proposed method is compared to SSDC. The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique is robust to changes in its single parameter, namely the number of end-members.
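A minimal sketch of the unmix-reconstruct-subtract idea described above, with the end-members assumed to be already known (the N-FINDR extraction step is not shown) and a synthetic cube standing in for real data:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_noise(pixels, endmembers):
    """Estimate per-pixel noise as the residual of a non-negative least squares
    unmixing reconstruction. Shapes: pixels (n, bands), endmembers (m, bands)."""
    recon = np.empty_like(pixels)
    for i, spectrum in enumerate(pixels):
        abundances, _ = nnls(endmembers.T, spectrum)   # solve bands x m system
        recon[i] = endmembers.T @ abundances
    noise = pixels - recon
    return noise, noise.std()

rng = np.random.default_rng(3)
E = rng.random((4, 50))                      # 4 end-members, 50 bands
A = rng.dirichlet(np.ones(4), size=200)      # true abundances for 200 pixels
cube = A @ E + rng.normal(0.0, 0.01, (200, 50))
_, sigma_hat = estimate_noise(cube, E)
print(round(sigma_hat, 4))                   # close to the injected 0.01
```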
Planning Assembly Of Large Truss Structures In Outer Space
NASA Technical Reports Server (NTRS)
De Mello, Luiz S. Homem; Desai, Rajiv S.
1992-01-01
Report discusses a developmental algorithm used in the systematic planning of sequences of operations in which large truss structures are assembled in outer space. The assembly sequence is represented by a directed graph called an "assembly graph", in which each arc represents the joining of two parts or subassemblies. The algorithm generates the assembly graph working backward from the state of complete assembly to the initial state, in which all parts are disassembled. Working backward is more efficient than working forward because it avoids intermediate dead ends.
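A toy illustration of the backward-planning idea: repeatedly remove a part whose removal keeps the remaining subassembly connected, then reverse the order. Here graph connectivity is only a stand-in for the feasibility tests the actual planner would apply, and the truss data are invented:

```python
import networkx as nx

def plan_assembly(liaison_graph):
    """Backward assembly planning sketch: peel off parts from the complete
    structure while the remainder stays connected, then reverse the order."""
    g = liaison_graph.copy()
    disassembly = []
    while g.number_of_nodes() > 1:
        for part in sorted(g.nodes):
            rest = g.copy()
            rest.remove_node(part)
            if nx.is_connected(rest):        # stand-in feasibility check
                disassembly.append(part)
                g = rest
                break
    disassembly.append(next(iter(g.nodes)))
    return list(reversed(disassembly))       # assembly order

# Toy truss: strut connections between joints A-E.
truss = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D"), ("D", "E")])
print(plan_assembly(truss))
```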
Advanced Clinical Decision Support for Transport of the Critically Ill Patient
2013-12-01
...algorithms, status asthmaticus and status epilepticus, are to "go live" for use on pediatric critical care transport by the end of October. (Appendices 5...additional algorithms (status asthmaticus and status epilepticus, Appendices 5 and 6). 8) Plans for validation testing to other transport teams...Status Asthmaticus Clinical Practice Guideline; Status Epilepticus Clinical Practice Guideline.
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named the fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate docking prediction. The proposed algorithm works by leveraging two high-performance operators: (1) a novel migration (information exchange) operator designed specially for cloud-based environments to reduce the computation time; and (2) an efficient operator aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
Applications of an architecture design and assessment system (ADAS)
NASA Technical Reports Server (NTRS)
Gray, F. Gail; Debrunner, Linda S.; White, Tennis S.
1988-01-01
A new Architecture Design and Assessment System (ADAS) tool package is introduced, and a range of possible applications is illustrated. ADAS was used to evaluate the performance of an advanced fault-tolerant computer architecture in a modern flight control application. Bottlenecks were identified and possible solutions suggested. The tool was also used to inject faults into the architecture and evaluate the synchronization algorithm, and improvements are suggested. Finally, ADAS was used as a front end research tool to aid in the design of reconfiguration algorithms in a distributed array architecture.
An improved rainfall disaggregation technique for GCMs
NASA Astrophysics Data System (ADS)
Onof, C.; Mackay, N. G.; Oh, L.; Wheater, H. S.
1998-08-01
Meteorological models represent rainfall as a mean value for a grid square so that when the latter is large, a disaggregation scheme is required to represent the spatial variability of rainfall. In general circulation models (GCMs) this is based on an assumption of exponentiality of rainfall intensities and a fixed value of areal rainfall coverage, dependent on rainfall type. This paper examines these two assumptions on the basis of U.K. and U.S. radar data. Firstly, the coverage of an area is strongly dependent on its size, and this dependence exhibits a scaling law over a range of sizes. Secondly, the coverage is, of course, dependent on the resolution at which it is measured, although this dependence is weak at high resolutions. Thirdly, the time series of rainfall coverages has a long-tailed autocorrelation function which is comparable to that of the mean areal rainfalls. It is therefore possible to reproduce much of the temporal dependence of coverages by using a regression of the log of the mean rainfall on the log of the coverage. The exponential assumption is satisfactory in many cases but not able to reproduce some of the long-tailed dependence of some intensity distributions. Gamma and lognormal distributions provide a better fit in these cases, but they have their shortcomings and require a second parameter. An improved disaggregation scheme for GCMs is proposed which incorporates the previous findings to allow the coverage to be obtained for any area and any mean rainfall intensity. The parameters required are given and some of their seasonal behavior is analyzed.
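A minimal sketch of the kind of grid-cell disaggregation discussed above: a wet areal coverage estimated from the mean rainfall via a log-log (power-law) relationship, and exponential intensities assigned to the wet fraction so the cell mean is conserved. The coefficients a and b are placeholders, not fitted values from the paper:

```python
import numpy as np

def disaggregate_gcm_cell(mean_rain, n_cells, a=0.4, b=0.3, rng=None):
    """Disaggregate a GCM grid-cell mean rainfall into a spatial field (sketch).

    (i) coverage from the grid-mean rainfall via an assumed power law,
    (ii) exponential intensities over the wet cells, preserving the cell mean.
    """
    rng = np.random.default_rng(rng)
    coverage = min(1.0, a * mean_rain ** b)          # wet areal fraction
    n_wet = max(1, int(round(coverage * n_cells)))
    wet_mean = mean_rain * n_cells / n_wet           # conserve the cell mean
    field = np.zeros(n_cells)
    wet_cells = rng.choice(n_cells, size=n_wet, replace=False)
    field[wet_cells] = rng.exponential(wet_mean, size=n_wet)
    return field

field = disaggregate_gcm_cell(mean_rain=2.0, n_cells=100, rng=0)
print(round(field.mean(), 2), (field > 0).mean())    # mean near 2, coverage ~0.49
```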
Success in Undergraduate Engineering Programs: A Comparative Analysis by Race and Gender
NASA Astrophysics Data System (ADS)
Lord, Susan
2010-03-01
Interest in increasing the number of engineering graduates in the United States and promoting gender equality and diversification of the profession has encouraged considerable research on women and minorities in engineering programs. Drawing on a framework of intersectionality theory, this work recognizes that women of different ethnic backgrounds warrant disaggregated analysis because they do not necessarily share a common experience in engineering education. Using a longitudinal, comprehensive data set of more than 79,000 students who matriculated in engineering at nine universities in the Southeastern United States, this research examines how the six-year graduation rates of engineering students vary by disaggregated combinations of gender and race/ethnicity. Contrary to the popular opinion that women drop out of engineering at higher rates, our results show that Asian, Black, Hispanic, Native American, and White women who matriculate in engineering are as likely as men to graduate in engineering within six years. In fact, Asian, Black, Hispanic, and Native American women engineering matriculants graduate at higher rates than men, and there is only a small difference for White students: 54 percent of White women engineering matriculants graduate within six years, compared with 53 percent of White men. For male and female engineering matriculants of all races, the most likely destination six years after entering college is graduation within engineering. This work underscores the importance of research disaggregated by race and gender and points to the critical need for more recruitment of women into engineering, as the low representation of women in engineering education is primarily a reflection of their low representation at matriculation.
Yamasaki, Takashi; Oohata, Yukiko; Nakamura, Toshiki; Watanabe, Yo-hei
2015-04-10
The ClpB/Hsp104 chaperone solubilizes and reactivates protein aggregates in cooperation with DnaK/Hsp70 and its cofactors. The ClpB/Hsp104 protomer has two AAA+ modules, AAA-1 and AAA-2, and forms a homohexamer. In the hexamer, these modules form a two-tiered ring in which each tier consists of homotypic AAA+ modules. By ATP binding and its hydrolysis at these AAA+ modules, ClpB/Hsp104 exerts the mechanical power required for protein disaggregation. Although ATPase cycle of this chaperone has been studied by several groups, an integrated understanding of this cycle has not been obtained because of the complexity of the mechanism and differences between species. To improve our understanding of the ATPase cycle, we prepared many ordered heterohexamers of ClpB from Thermus thermophilus, in which two subunits having different mutations were cross-linked to each other and arranged alternately and measured their nucleotide binding, ATP hydrolysis, and disaggregation abilities. The results indicated that the ATPase cycle of ClpB proceeded as follows: (i) the 12 AAA+ modules randomly bound ATP, (ii) the binding of four or more ATP to one AAA+ ring was sensed by a conserved Arg residue and converted another AAA+ ring into the ATPase-active form, and (iii) ATP hydrolysis occurred cooperatively in each ring. We also found that cooperative ATP hydrolysis in at least one ring was needed for the disaggregation activity of ClpB. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Gazard, Billy; Frissa, Souci; Nellums, Laura; Hotopf, Matthew; Hatch, Stephani L.
2015-01-01
Objectives. This study aimed to investigate the associations between migration status and health-related outcomes and to examine whether and how the effect of migration status changes when it is disaggregated by length of residence, first language, reason for migration and combined with ethnicity. Design. A total of 1698 adults were interviewed from 1076 randomly selected households in two South London boroughs. We described the socio-demographic and socio-economic differences between migrants and non-migrants and compared the prevalence of health-related outcomes by migration status, length of residence, first language, reason for migration and migration status within ethnic groups. Unadjusted models and models adjusted for socio-demographic and socio-economic indicators are presented. Results. Migrants were disadvantaged in terms of socio-economic status but few differences were found between migrant and non-migrants regarding health or health service use indicators; migration status was associated with decreased hazardous alcohol use, functional limitations due to poor mental health and not being registered with a general practitioner. Important differences emerged when migration status was disaggregated by length of residence in the UK, first language, reason for migration and intersected with ethnicity. The association between migration status and functional limitations due to poor mental health was only seen in White migrants, migrants whose first language was not English and migrants who had moved to the UK for work or a better life or for asylum or political reasons. There was no association between migration status and self-rated health overall, but Black African migrants had decreased odds for reporting poor health compared to their non-migrant counterparts [odds ratio = 0.15 (0.05–0.48), p < 0.01]. Conclusions. Disaggregating migration status by length of residence, first language and reason for migration as well as intersecting it with ethnicity leads to better understanding of the effect migration status has on health and health service use. PMID:25271468
Alarm systems detect volcanic tremor and earthquake swarms during Redoubt eruption, 2009
NASA Astrophysics Data System (ADS)
Thompson, G.; West, M. E.
2009-12-01
We ran two alarm algorithms on real-time data from Redoubt volcano during the 2009 crisis. The first algorithm was designed to detect escalations in continuous seismicity (tremor). It is implemented within an application called IceWeb, which computes reduced displacement and produces plots of reduced displacement and spectrograms linked to the Alaska Volcano Observatory internal webpage every 10 minutes. Reduced displacement is a measure of the amplitude of volcanic tremor, and is computed by applying a geometrical spreading correction to a displacement seismogram. When the reduced displacement at multiple stations exceeds pre-defined thresholds and there has been a factor of 3 increase in reduced displacement over the previous hour, a tremor alarm is declared. The second algorithm was designed to detect earthquake swarms. The mean and median event rates are computed every 5 minutes based on the last hour of data from a real-time event catalog. By comparing these with thresholds, three swarm alarm conditions can be declared: a new swarm, an escalation in a swarm, and the end of a swarm. The end-of-swarm alarm is important as it may mark a transition from swarm to continuous tremor. Alarms from both systems were dispatched using a generic alarm management system which implements a call-down list, allowing observatory scientists to be called in sequence until someone acknowledged the alarm via a confirmation web page. The results of this simple approach are encouraging. The tremor alarm algorithm detected 26 of the 27 explosive eruptions that occurred from 23 March to 4 April. The swarm alarm algorithm detected all five of the main volcanic earthquake swarm episodes which occurred during the Redoubt crisis, on 26-27 February, 21-23 March, 26 March, 2-4 April and 3-7 May. The end-of-swarm alarms on 23 March and 4 April were particularly helpful as they were caused by transitions from swarm to tremor shortly preceding explosive eruptions; transitions which were detected much earlier by the swarm algorithm than by the tremor algorithm.
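A minimal sketch of the two alarm rules described above: the factor-of-3 tremor escalation check on reduced displacement, and a three-state swarm condition based on event rate. Station codes, thresholds and rate values here are illustrative, not the operational settings:

```python
def tremor_alarm(dr_now, dr_hour_ago, thresholds, min_stations=2, escalation=3.0):
    """Declare a tremor alarm when reduced displacement (DR) at several stations
    exceeds its threshold and has tripled over the previous hour.
    Inputs are dicts keyed by station code."""
    exceed = [s for s in dr_now
              if dr_now[s] >= thresholds[s]
              and dr_now[s] >= escalation * max(dr_hour_ago[s], 1e-9)]
    return len(exceed) >= min_stations

def swarm_alarm_state(event_rate, prev_state, on=10, escalate=30, off=3):
    """Three swarm conditions from a 5-minute event-rate check:
    new swarm, escalation, end of swarm. Threshold values are illustrative."""
    if prev_state == "none" and event_rate >= on:
        return "new swarm"
    if prev_state in ("new swarm", "escalation") and event_rate >= escalate:
        return "escalation"
    if prev_state != "none" and event_rate <= off:
        return "end of swarm"
    return prev_state

print(tremor_alarm({"RDN": 12.0, "REF": 9.0}, {"RDN": 3.0, "REF": 2.5},
                   {"RDN": 5.0, "REF": 5.0}))                     # True
print(swarm_alarm_state(event_rate=42, prev_state="new swarm"))   # escalation
```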
Davies, M; Lavalle-González, F; Storms, F; Gomis, R
2008-05-01
For many patients with type 2 diabetes, oral antidiabetic agents (OADs) do not provide optimal glycaemic control, necessitating insulin therapy. Fear of hypoglycaemia is a major barrier to initiating insulin therapy. The AT.LANTUS study investigated optimal methods to initiate and maintain insulin glargine (LANTUS, glargine, Sanofi-aventis, Paris, France) therapy using two treatment algorithms. This subgroup analysis investigated the initiation of once-daily glargine therapy in patients suboptimally controlled on multiple OADs. This study was a 24-week, multinational (59 countries), multicenter (611), randomized study. Algorithm 1 was a clinic-driven titration and algorithm 2 was a patient-driven titration. Titration was based on target fasting blood glucose < or =100 mg/dl (< or =5.5 mmol/l). Algorithms were compared for incidence of severe hypoglycaemia [requiring assistance and blood glucose <50 mg/dl (<2.8 mmol/l)] and baseline to end-point change in haemoglobin A(1c) (HbA(1c)). Of the 4961 patients enrolled in the study, 865 were included in this subgroup analysis: 340 received glargine plus 1 OAD and 525 received glargine plus >1 OAD. Incidence of severe hypoglycaemia was <1%. HbA(1c) decreased significantly between baseline and end-point for patients receiving glargine plus 1 OAD (-1.4%, p < 0.001; algorithm 1 -1.3% vs. algorithm 2 -1.5%; p = 0.03) and glargine plus >1 OAD (-1.7%, p < 0.001; algorithm 1 -1.5% vs. algorithm 2 -1.8%; p = 0.001). This study shows that initiation of once-daily glargine with OADs results in significant reduction of HbA(1c) with a low risk of hypoglycaemia. The greater reduction in HbA(1c) was seen in patients randomized to the patient-driven algorithm (algorithm 2) on 1 or >1 OAD.
An Expert System toward Building an Earth Science Knowledge Graph
NASA Astrophysics Data System (ADS)
Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.
2017-12-01
In this ongoing work, we aim to build foundations of Cognitive Computing for Earth Science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing Knowledge Graphs for Earth Science (KG4ES) using existing corpus of journal papers and reports. One of the key challenges in any machine learning, especially deep learning applications, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES, based on ever evolving training data. We also adopt the evaluation instrument based on common research methodologies used in Earth science research, especially in Atmospheric Science. Second, we have developed an algorithm to infer new knowledge that can exploit the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.
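The abstract mentions a network prediction algorithm that proposes new connections in the knowledge graph, but does not specify it. As a loose illustration of that idea, the sketch below scores candidate links in a small toy graph by shared neighbors; the node names and the common-neighbors heuristic are assumptions, not the KG4ES method.

```python
# Generic link-prediction stand-in: rank non-adjacent node pairs by the
# number of neighbors they share in the knowledge graph.
import itertools
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("MODIS", "aerosol optical depth"),
    ("MODIS", "cloud fraction"),
    ("CALIPSO", "aerosol optical depth"),
    ("CALIPSO", "lidar backscatter"),
    ("aerosol optical depth", "air quality"),
])

def common_neighbor_scores(g):
    """Score non-adjacent node pairs by the number of shared neighbors."""
    scores = []
    for u, v in itertools.combinations(g.nodes, 2):
        if not g.has_edge(u, v):
            shared = len(set(g[u]) & set(g[v]))
            if shared:
                scores.append((u, v, shared))
    return sorted(scores, key=lambda t: t[2], reverse=True)

print(common_neighbor_scores(kg)[:3])  # top candidate "new knowledge" links
```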
Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.
Banakh, Victor A; Marakasov, Dimitrii A
2007-10-01
Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.
The notion of "double consciousness" in Alfred Binet's psychological experimentalism.
Foschi, Renato; Cicciola, Elisabetta
2006-01-01
Between 1889 and 1892, Binet published two remarkable essays, On Double Consciousness and Les alterations de la personnalité, which marked the end of a period of research and interests closely linked to the doctrines on hypnosis and hysteria elaborated by the Ecole de la Salpêtrière. Later on, Binet was to abandon the utilization of hypnosis as a technique of experimentation, after he realized that the suggestibility of the "subjects" of these experiments had led to major experimental mistakes. However, during the years of his work at the Salpêtrière, he elaborated the notion of "double consciousness," which can be considered an alternative both to Ribot's idea of dissociation and to Janet's idea of disaggregation. The notion of double consciousness reveals both the originality of Binet's psychology--which was elaborated at the end of the nineteenth century--and its verifiable link to twentieth-century psychology. Unlike Janet, in fact, Binet did not support a theory of psychological deficiency or "misery," or of the retraction of the sphere of consciousness, which a normal capacity for psychological synthesis would oppose. On the contrary, Binet's psychology resulted in a theory stating that the duality of consciousness works in a perfect and autonomous way within the individual and, thanks to hypnosis, can be investigated in a laboratory.
NASA Astrophysics Data System (ADS)
Brodic, D.
2011-01-01
Text line segmentation is a key element in the optical character recognition process. Hence, testing of text line segmentation algorithms is of substantial relevance. Previously proposed testing methods rely mainly on a text database used as a template, which serves both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text line segmentation algorithms based on an extended binary classification is proposed. It is built on various multiline text samples associated with text segmentation; their results are distributed according to the binary classification, and the final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents its main advantage.
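The "extended" classification scheme is not detailed in the abstract, but the evaluation it describes rests on counting correct, spurious, and missed line detections. The sketch below shows only that standard binary-classification bookkeeping (precision, recall, F-measure); treat the counting convention as an assumption.

```python
# Illustrative evaluation of a segmentation result from binary classification
# counts: true positives, false positives, and false negatives.

def segmentation_scores(true_lines, detected_lines):
    """true_lines / detected_lines: sets of (sample_id, line_index) pairs."""
    tp = len(true_lines & detected_lines)   # correctly segmented lines
    fp = len(detected_lines - true_lines)   # spurious detections
    fn = len(true_lines - detected_lines)   # missed lines
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

truth = {(1, 0), (1, 1), (1, 2), (2, 0)}
detected = {(1, 0), (1, 1), (2, 0), (2, 1)}
print(segmentation_scores(truth, detected))
```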
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-04-10
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow rate signal and efficiently converts the analog signal, improving the execution performance of a microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise, remains stable and independent of fluctuations in the flow measurement, and can compute the current flow rate (m/s) immediately. The DSC algorithm can be applied to a current general-purpose MCU firmware platform without using DSP (digital signal processing) or a high-speed, high-end MCU platform, and hardware signal amplification reduces the demand for ADC accuracy, which lowers the cost.
Gamut extension for cinema: psychophysical evaluation of the state of the art and a new algorithm
NASA Astrophysics Data System (ADS)
Zamir, Syed Waqas; Vazquez-Corral, Javier; Bertalmío, Marcelo
2015-03-01
Wide gamut digital display technology, in order to show its full potential in terms of colors, is creating an opportunity to develop gamut extension algorithms (GEAs). To this end, in this work we present two contributions. First we report a psychophysical evaluation of GEAs specifically for cinema using a digital cinema projector under cinematic (low ambient light) conditions; to the best of our knowledge this is the first evaluation of this kind reported in the literature. Second, we propose a new GEA by introducing simple but key modifications to the algorithm of Zamir et al. This new algorithm performs well in terms of skin tones and memory colors, with results that look natural and which are free from artifacts.
Haptic device for a ventricular shunt insertion simulator.
Panchaphongsaphak, Bundit; Stutzer, Diego; Schwyter, Etienne; Bernays, René-Ludwig; Riener, Robert
2006-01-01
In this paper we propose a new one-degree-of-freedom haptic device that can be used to simulate ventricular shunt insertion procedures. The device is used together with the BRAINTRAIN training simulator developed for neuroscience education, neurological data visualization and surgical planning. The design of the haptic device is based on a push-pull cable concept. The rendered forces produced by a linear motor connected at one end of the cable are transferred to the user via a sliding mechanism at the end-effector located at the other end of the cable. The end-effector provides a range of movement of up to 12 cm. The force is controlled by an open-loop impedance algorithm and can reach up to 15 N.
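A minimal sketch of an open-loop impedance law of the kind such a single-degree-of-freedom device could use to render contact forces is given below. The stiffness and damping gains are assumptions; only the 15 N saturation mirrors the force limit stated in the abstract.

```python
# Spring-damper impedance rendering along one axis, saturated at the motor limit.
K = 800.0     # N/m, virtual stiffness (assumed)
B = 5.0       # N*s/m, virtual damping (assumed)
F_MAX = 15.0  # N, actuator force limit from the abstract


def impedance_force(x, v, x_wall):
    """Commanded cable force for tool position x (m) and velocity v (m/s)
    against a virtual tissue boundary at x_wall (m)."""
    penetration = x - x_wall
    if penetration <= 0.0:              # not in contact: render free motion
        return 0.0
    f = K * penetration + B * v         # spring-damper impedance
    return max(0.0, min(f, F_MAX))      # saturate at the 15 N limit


print(impedance_force(0.052, 0.01, 0.050))  # ~1.65 N just past the boundary
```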
USDA-ARS?s Scientific Manuscript database
Atmosphere-Land Exchange Inverse model and associated disaggregation scheme (ALEXI/DisALEXI). Satellite-based ET retrievals from both the Moderate Resolution Imaging Spectoradiometer (MODIS; 1km, daily) and Landsat (30m, bi-weekly) are fused with The Spatial and Temporal Adaptive Reflective Fusion ...
NASA Astrophysics Data System (ADS)
Tai, Guangfu; Williams, Peter
2013-11-01
Hospitals can be viewed as service enterprises, of which the primary function is to provide specific sets of diagnostic and therapeutic medical services to individual patients. Each patient has certain diagnosis and therapeutic attributes in common with some other patients. Thus, patients with similar medical attributes could be 'processed' in one 'product line' of medical services, and individual treatments for patients within one 'product line' can be regarded as incurring identical consumption of health care resources. This article presents a theoretical framing for resource planning and investment allocation of various resources from a macro perspective of costs that demonstrates the need to plan capacity at the disaggregated resource level. The result of a balanced line ('optimal') is compared with an alternative scheme of 'the same ratio composing of resources' under the same monetary constraints. Thus, it is demonstrated that planning at the disaggregated level affords much better use of resources than achieved in common practice of budget control by simple percentage increase/decrease in distributing a financial vote.
A comparison of force control algorithms for robots in contact with flexible environments
NASA Technical Reports Server (NTRS)
Wilfinger, Lee S.
1992-01-01
In order to perform useful tasks, the robot end-effector must come into contact with its environment. For such tasks, force feedback is frequently used to control the interaction forces. Control of these forces is complicated by the fact that the flexibility of the environment affects the stability of the force control algorithm. Because of the wide variety of different materials present in everyday environments, it is necessary to gain an understanding of how environmental flexibility affects the stability of force control algorithms. This report presents the theory and experimental results of two force control algorithms: Position Accommodation Control and Direct Force Servoing. The implementation of each of these algorithms on a two-arm robotic test bed located in the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) is discussed in detail. The behavior of each algorithm when contacting materials of different flexibility is experimentally determined. In addition, several robustness improvements to the Direct Force Servoing algorithm are suggested and experimentally verified. Finally, a qualitative comparison of the force control algorithms is provided, along with a description of a general tuning process for each control method.
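As a rough illustration of the second approach named in the report (Direct Force Servoing), the sketch below closes an integral control loop on the force error against a stiff environment. The gains, time step, and environment stiffness are assumptions for illustration, not values from the CIRSSE test bed, and the report's actual controller structure may differ.

```python
# Integral force servo against a stiff environment: the measured contact
# force is fed back and the commanded end-effector position is adjusted.
DT = 0.002        # s, control period (assumed)
KI = 0.04         # m/(N*s), integral force gain (assumed)
K_ENV = 10000.0   # N/m, stiffness of the contacted surface (assumed)


def simulate_force_servo(f_desired=10.0, steps=2000):
    x_cmd = 0.0      # commanded penetration into the environment (m)
    f_meas = 0.0
    for _ in range(steps):
        error = f_desired - f_meas
        x_cmd += KI * error * DT           # integral action on force error
        f_meas = K_ENV * max(x_cmd, 0.0)   # environment reaction force
    return f_meas


print(round(simulate_force_servo(), 2))  # converges toward the 10 N setpoint
```

Note that stability of such a loop depends on the product of gain, time step, and environment stiffness, which is exactly why environmental flexibility matters for force control, as the report emphasizes.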
Dinamarca, M C; Cerpa, W; Garrido, J; Hancke, J L; Inestrosa, N C
2006-11-01
The major protein constituent of amyloid deposits in Alzheimer's disease (AD) is the amyloid beta-peptide (Abeta). In the present work, we have determined the effect of hyperforin an acylphloroglucinol compound isolated from Hypericum perforatum (St John's Wort), on Abeta-induced spatial memory impairments and on Abeta neurotoxicity. We report here that hyperforin: (1) decreases amyloid deposit formation in rats injected with amyloid fibrils in the hippocampus; (2) decreases the neuropathological changes and behavioral impairments in a rat model of amyloidosis; (3) prevents Abeta-induced neurotoxicity in hippocampal neurons both from amyloid fibrils and Abeta oligomers, avoiding the increase in reactive oxidative species associated with amyloid toxicity. Both effects could be explained by the capacity of hyperforin to disaggregate amyloid deposits in a dose and time-dependent manner and to decrease Abeta aggregation and amyloid formation. Altogether these evidences suggest that hyperforin may be useful to decrease amyloid burden and toxicity in AD patients, and may be a putative therapeutic agent to fight the disease.
Concurrent changes in aggregation and swelling of coal particles in solvents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishioka, M.
1995-12-31
A new method of coal swelling has been developed under the condition of low coal concentrations with continuous mixing of coal and solvent. The change in particle size distributions by a laser scattering procedure was used for the evaluation of coal swelling. Particle size distributions in good and poor solvents were nearly equal, but reversibly changed in good solvents from time to time. The effects of solubles and coal concentrations on the distributions were small. It was concluded that aggregated coal particles disaggregate in good solvents, and that an increase in the particle size distribution due to swelling in good solvents is compensated by a decrease in the particle size due to disaggregation. Therefore, the behavior of coal particles in solvents is controlled by aggregation in addition to coal swelling. This implies that an increase in the particle size due to coal swelling in actual processes is not as large as expected from the results obtained with the conventional coal swelling methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, Song Yi; Kim, Seulgi; Hwang, Heejin
Research highlights: • Formation of α-synuclein amyloid fibrils by [BIMbF3Im]. • Disaggregation of amyloid fibrils by epigallocatechin gallate (EGCG) and baicalein. • Amyloid formation of the α-synuclein tandem repeat (α-TR). -- Abstract: The aggregation of α-synuclein is clearly related to the pathogenesis of Parkinson's disease. Therefore, a detailed understanding of the mechanism of fibril formation is highly valuable for the development of clinical treatments and also of diagnostic tools. Here, we have investigated the interaction of α-synuclein with ionic liquids by using several biochemical techniques including Thioflavin T assays and transmission electron microscopy (TEM). Our data show that rapid formation of α-synuclein amyloid fibrils was stimulated by 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [BIMbF3Im], and that these fibrils could be disaggregated by polyphenols such as epigallocatechin gallate (EGCG) and baicalein. Furthermore, the effect of [BIMbF3Im] on the α-synuclein tandem repeat (α-TR) in the aggregation process was studied.
Action of trypsin on structural changes of collagen fibres from sea cucumber (Stichopus japonicus).
Liu, Zi-Qiang; Tuo, Feng-Yan; Song, Liang; Liu, Yu-Xin; Dong, Xiu-Ping; Li, Dong-Mei; Zhou, Da-Yong; Shahidi, Fereidoon
2018-08-01
Trypsin, a representative serine proteinase, was used to hydrolyse the collagen fibres from sea cucumber (Stichopus japonicus) to highlight the role of serine proteinase in the autolysis of sea cucumber. Partial disaggregation of collagen fibres into collagen fibrils occurred upon trypsin treatment. The trypsin treatment also caused a time-dependent release of water-soluble glycosaminoglycans and proteins. Therefore, the degradation of the proteoglycan bridges between collagen fibrils might account for the disaggregation of collagen fibrils. For trypsin-treated collagen fibres (72 h), the collagen fibrils still kept their structural integrity and showed the characteristic D-banding pattern, and the dissolution rate of hydroxyproline was just 0.21%. Meanwhile, Fourier transform infrared analysis showed that the collagen within trypsin-treated collagen fibres (72 h) still retained its triple-helical conformation. These results suggested that serine proteinase participated in the autolysis of the S. japonicus body wall by damaging the proteoglycan bridges between collagen fibrils and disintegrating the latter. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Graham, Linda J.; Sweller, Naomi; Van Bergen, Penny
2010-01-01
This article examines the increase in segregated placements in the New South Wales government school sector. Using disaggregated enrolment data, it points to the growing over-representation of boys in special schools and classes, particularly those of a certain age in certain support categories. In the discussion that follows, the authors question…
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
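The parameter-selection loop described above (sweep parameter combinations, build an estimated ground truth from the resulting feature maps, and pick the combination whose ROC point is best) can be sketched as follows. The toy gradient detector, the parameter grid, and the simple majority-vote ground truth are illustrative simplifications of the Yitzhaky-Peli procedure, not the exact method.

```python
# Choose detector parameters by distance of each ROC point to the ideal (FPR=0, TPR=1).
import itertools
import numpy as np
from scipy import ndimage

def detect(image, sigma, threshold):
    """Toy feature detector: gradient magnitude after Gaussian smoothing."""
    grad = ndimage.gaussian_gradient_magnitude(image, sigma=sigma)
    return grad > threshold * grad.max()

def roc_point(candidate, truth):
    tp = np.logical_and(candidate, truth).sum()
    fp = np.logical_and(candidate, ~truth).sum()
    tpr = tp / max(truth.sum(), 1)
    fpr = fp / max((~truth).sum(), 1)
    return fpr, tpr

def select_parameters(image, sigmas, thresholds):
    combos = list(itertools.product(sigmas, thresholds))
    maps = [detect(image, s, t) for s, t in combos]
    truth = np.mean(maps, axis=0) > 0.5   # majority vote as estimated ground truth
    def distance_to_ideal(feature_map):
        fpr, tpr = roc_point(feature_map, truth)
        return np.hypot(fpr, 1.0 - tpr)   # distance from the (0, 1) corner
    scores = [distance_to_ideal(m) for m in maps]
    return combos[int(np.argmin(scores))]

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
print(select_parameters(img, sigmas=[1.0, 2.0], thresholds=[0.3, 0.5]))
```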
Energy Use and Carbon Emissions: Some International Comparisons
1994-01-01
Presents energy use and carbon emissions patterns in a world context. The report contrasts trends in economically developed and developing areas of the world since 1970, presents a disaggregated view of the "Group of Seven" (G7) key industrialized countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) and examines sectoral energy use patterns within each of the G7 countries.
NASA Technical Reports Server (NTRS)
Shultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Stano, Geoffrey T.; Blakeslee, Richard J.; Goodman, Steven J.
2014-01-01
The presence and rates of total lightning are both correlated to and physically dependent upon storm updraft strength, mixed phase precipitation volume and the size of the charging zone. The updraft modulates the ingredients necessary for electrification within a thunderstorm, while the updraft also plays a critical role in the development of severe and hazardous weather. Therefore utilizing this relationship, the monitoring of lightning rates and jumps provides an additional piece of information on the evolution of a thunderstorm, more often than not, at higher temporal resolution than current operational radar systems. This correlation is the basis for the total lightning jump algorithm that has been developed in recent years. Currently, the lightning jump algorithm is being tested in two separate but important efforts. Schultz et al. (2014; AMS 10th Satellite Symposium) is exploring the transition of the algorithm from its research based formulation to a fully objective algorithm that includes storm tracking, Geostationary Lightning Mapper (GLM) Proxy data and the lightning jump algorithm. Chronis et al. (2014; this conference) provides context for the transition to current operational forecasting using lightning mapping array based products. However, what remains is an end to end physical and dynamical basis for relating lightning rates to severe storm manifestation, so the forecaster has a reason beyond simple correlation to utilize the lightning jump algorithm within their severe storm conceptual models. Therefore, the physical basis for the lightning jump algorithm in relation to severe storm dynamics and microphysics is a key component that must be further explored. Many radar studies have examined flash rates and their relation to updraft strength, updraft volume, precipitation-sized ice mass, etc.; however, relation specifically to lightning jumps is fragmented within the literature. Thus the goal of this study is to use multiple Doppler techniques to resolve the physical and dynamical storm characteristics specifically around the time of the lightning jump. This information will help forecasters anticipate lightning jump occurrence, or even be of use to determine future characteristics of a given storm (e.g., development of a mesocyclone, downdraft, or hail signature on radar), providing additional lead time/confidence in the severe storm warning paradigm.
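For context, the lightning jump algorithms referenced here are commonly formulated as a sigma-type test on the change in total flash rate. The sketch below is a simplified illustration of that idea; the 2-minute binning and the 2-sigma threshold follow commonly cited configurations but should be treated as assumptions rather than the operational algorithm used in these studies.

```python
# Simplified sigma-type lightning jump test: flag a jump when the newest
# change in flash rate exceeds a multiple of the recent variability.
import numpy as np

def lightning_jump(flash_rates, sigma_threshold=2.0):
    """flash_rates: total flash rates (flashes/min) in consecutive 2-minute
    periods, oldest first; returns True if the newest rate change exceeds
    sigma_threshold times the std. dev. of the preceding changes."""
    rates = np.asarray(flash_rates, dtype=float)
    if rates.size < 7:                       # need a short history to work with
        return False
    dfrdt = np.diff(rates)                   # rate of change between periods
    history, latest = dfrdt[:-1], dfrdt[-1]
    sigma = history.std(ddof=1)
    return sigma > 0 and latest > sigma_threshold * sigma

# Example: a storm whose flash rate suddenly accelerates in the last period
print(lightning_jump([10, 12, 11, 14, 13, 15, 16, 35]))  # True
```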
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
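The paper derives the spatially constrained, mean-square-optimal kernel analytically from an end-to-end system model. As a loose illustration of the same constrained-MSE idea, the sketch below instead fits a small kernel empirically by least squares from a degraded/ideal image pair; the 5x5 support, the synthetic blur, and the noise level are assumptions.

```python
# Least-squares fit of a small restoration kernel minimizing the mean-square
# error between the restored (convolved) image and the ideal image.
import numpy as np

def fit_small_kernel(degraded, ideal, size=5):
    """Fit a size x size kernel K minimizing ||corr(degraded, K) - ideal||^2."""
    r = size // 2
    rows, cols = [], []
    h, w = degraded.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = degraded[i - r:i + r + 1, j - r:j + r + 1]
            rows.append(patch.ravel())
            cols.append(ideal[i, j])
    A, b = np.asarray(rows), np.asarray(cols)
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(size, size)

rng = np.random.default_rng(1)
ideal = rng.normal(size=(64, 64))
blur = np.ones((3, 3)) / 9.0                      # synthetic 3x3 blur (assumed)
padded = np.pad(ideal, 1, mode="edge")
degraded = sum(padded[di:di + 64, dj:dj + 64] * blur[di, dj]
               for di in range(3) for dj in range(3))
degraded += 0.01 * rng.normal(size=degraded.shape)  # sensor noise (assumed)
K = fit_small_kernel(degraded, ideal)
print(K.shape, float(np.abs(K).max()))
```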
Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin
2015-01-01
Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405
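The core colourisation idea, keeping colour information from a periodic key frame and reusing it for the following grey-scale frames, can be sketched as below. The actual scheme in the paper is dictionary-based and more elaborate; the YCbCr recombination used here is only an illustrative stand-in, and the frame data are synthetic.

```python
# Reuse the chrominance of the most recent colour (key) frame to colourise
# subsequent luma-only frames.
import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

def colourise_sequence(colour_key_frame, grey_frames):
    """Reuse the key frame's chroma for every following grey-scale frame."""
    _, cb, cr = rgb_to_ycbcr(colour_key_frame.astype(float))
    return [ycbcr_to_rgb(g.astype(float), cb, cr) for g in grey_frames]

key = np.full((8, 8, 3), (180, 110, 90), dtype=np.uint8)   # reddish tone (synthetic)
greys = [np.full((8, 8), v, dtype=np.uint8) for v in (100, 120, 140)]
print(colourise_sequence(key, greys)[0][0, 0])
```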
Availability and End-to-end Reliability in Low Duty Cycle Multihop Wireless Sensor Networks.
Suhonen, Jukka; Hämäläinen, Timo D; Hännikäinen, Marko
2009-01-01
A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation, however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling for reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER.
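To make the PER-versus-reliability trade-off concrete, the sketch below computes how many link-layer transmission attempts per hop a multihop path needs to meet an end-to-end delivery target. This is a generic reliability model, not the TUTWSN/ZigBee forwarding algorithm itself; the hop count and target are assumptions.

```python
# Required transmission attempts per hop to reach an end-to-end reliability
# target over a path of independent links with a given packet error rate.

def retries_per_hop(per, hops, end_to_end_target):
    """Smallest number of attempts per hop so that a path of `hops` links
    delivers with probability >= end_to_end_target."""
    per_hop_target = end_to_end_target ** (1.0 / hops)
    attempts = 1
    while 1.0 - per ** attempts < per_hop_target:
        attempts += 1
    return attempts

for per in (0.1, 0.3):
    n = retries_per_hop(per, hops=5, end_to_end_target=0.99)
    print(f"PER={per:.0%}: {n} attempts/hop, overhead ~{n - 1} extra tx/hop")
```

The overhead grows with PER, which is consistent with the abstract's observation that the forwarding cost is relative to the packet error rate.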
Structural pathway of regulated substrate transfer and threading through an Hsp100 disaggregase.
Deville, Célia; Carroni, Marta; Franke, Kamila B; Topf, Maya; Bukau, Bernd; Mogk, Axel; Saibil, Helen R
2017-08-01
Refolding aggregated proteins is essential in combating cellular proteotoxic stress. Together with Hsp70, Hsp100 chaperones, including Escherichia coli ClpB, form a powerful disaggregation machine that threads aggregated polypeptides through the central pore of tandem adenosine triphosphatase (ATPase) rings. To visualize protein disaggregation, we determined cryo-electron microscopy structures of inactive and substrate-bound ClpB in the presence of adenosine 5'-O-(3-thiotriphosphate), revealing closed AAA+ rings with a pronounced seam. In the substrate-free state, a marked gradient of resolution, likely corresponding to mobility, spans across the AAA+ rings with a dynamic hotspot at the seam. On the seam side, the coiled-coil regulatory domains are locked in a horizontal, inactive orientation. On the opposite side, the regulatory domains are accessible for Hsp70 binding, substrate targeting, and activation. In the presence of the model substrate casein, the polypeptide threads through the entire pore channel and increased nucleotide occupancy correlates with higher ATPase activity. Substrate-induced domain displacements indicate a pathway of regulated substrate transfer from Hsp70 to the ClpB pore, inside which a spiral of loops contacts the substrate. The seam pore loops undergo marked displacements, along with ordering of the regulatory domains. These asymmetric movements suggest a mechanism for ATPase activation and substrate threading during disaggregation.
NASA Technical Reports Server (NTRS)
Oda, T.; Ott, L.; Lauvaux, T.; Feng, S.; Bun, R.; Roman, M.; Baker, D. F.; Pawson, S.
2017-01-01
Fossil fuel carbon dioxide (CO2) emissions (FFCO2) are the largest input to the global carbon cycle on a decadal time scale. Because total emissions are assumed to be reasonably well constrained by fuel statistics, FFCO2 often serves as a reference in order to deduce carbon uptake by poorly understood terrestrial and ocean sinks. Conventional atmospheric CO2 flux inversions solve for spatially explicit regional sources and sinks and estimate land and ocean fluxes by subtracting FFCO2. Thus, errors in FFCO2 can propagate into the final inferred flux estimates. Gridded emissions are often based on disaggregation of emissions estimated at national or regional level. Although national and regional total FFCO2 are well known, gridded emission fields are subject to additional uncertainties due to the emission disaggregation. Assessing such uncertainties is often challenging because of the lack of physical measurements for evaluation. We first review difficulties in assessing uncertainties associated with gridded FFCO2 emission data and present several approaches for evaluation of such uncertainties at multiple scales. Given known limitations, inter-emission data differences are often used as a proxy for the uncertainty. The popular approach allows us to characterize differences in emissions, but does not allow us to fully quantify emission disaggregation biases. Our work aims to vicariously evaluate FFCO2 emission data using atmospheric models and measurements. We show a global simulation experiment where uncertainty estimates are propagated as an atmospheric tracer (uncertainty tracer) alongside CO2 in NASA's GEOS model and discuss implications of FFCO2 uncertainties in the context of flux inversions. We also demonstrate the use of high resolution urban CO2 simulations as a tool for objectively evaluating FFCO2 data over intense emission regions. Though this study focuses on FFCO2 emission data, the outcome of this study could also help improve the knowledge of similar gridded emissions data for non-CO2 compounds with similar emission characteristics.
NASA Astrophysics Data System (ADS)
Oda, T.; Ott, L. E.; Lauvaux, T.; Feng, S.; Bun, R.; Roman, M. O.; Baker, D. F.; Pawson, S.
2017-12-01
Fossil fuel carbon dioxide (CO2) emissions (FFCO2) are the largest input to the global carbon cycle on a decadal time scale. Because total emissions are assumed to be reasonably well constrained by fuel statistics, FFCO2 often serves as a reference in order to deduce carbon uptake by poorly understood terrestrial and ocean sinks. Conventional atmospheric CO2 flux inversions solve for spatially explicit regional sources and sinks and estimate land and ocean fluxes by subtracting FFCO2. Thus, errors in FFCO2 can propagate into the final inferred flux estimates. Gridded emissions are often based on disaggregation of emissions estimated at national or regional level. Although national and regional total FFCO2 are well known, gridded emission fields are subject to additional uncertainties due to the emission disaggregation. Assessing such uncertainties is often challenging because of the lack of physical measurements for evaluation. We first review difficulties in assessing uncertainties associated with gridded FFCO2 emission data and present several approaches for evaluation of such uncertainties at multiple scales. Given known limitations, inter-emission data differences are often used as a proxy for the uncertainty. The popular approach allows us to characterize differences in emissions, but does not allow us to fully quantify emission disaggregation biases. Our work aims to vicariously evaluate FFCO2 emission data using atmospheric models and measurements. We show a global simulation experiment where uncertainty estimates are propagated as an atmospheric tracer (uncertainty tracer) alongside CO2 in NASA's GEOS model and discuss implications of FFCO2 uncertainties in the context of flux inversions. We also demonstrate the use of high resolution urban CO2 simulations as a tool for objectively evaluating FFCO2 data over intense emission regions. Though this study focuses on FFCO2 emission data, the outcome of this study could also help improve the knowledge of similar gridded emissions data for non-CO2 compounds that share emission sectors.
A generic method for improving the spatial interoperability of medical and ecological databases.
Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M
2017-10-03
The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
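A minimal sketch of the two-step merge is given below: a transition table links each source spatial unit to a final unit, and the mapping is then checked by comparing a covariate (here, births) counted in both databases per final unit. The column names and figures are hypothetical, not those of the French databases used in the study.

```python
# Step 1: realise the transition matrices as unit-to-unit lookup tables.
# Step 2: aggregate each database onto the final units and compare a covariate.
import pandas as pd

med_to_final = pd.DataFrame({"med_unit": ["A1", "A2", "B1"],
                             "final_unit": ["U1", "U1", "U2"]})
eco_to_final = pd.DataFrame({"eco_unit": ["x", "y", "z"],
                             "final_unit": ["U1", "U2", "U2"]})

medical = pd.DataFrame({"med_unit": ["A1", "A2", "B1"], "births": [120, 80, 60]})
ecological = pd.DataFrame({"eco_unit": ["x", "y", "z"], "births": [195, 40, 25]})

med_final = medical.merge(med_to_final).groupby("final_unit")["births"].sum()
eco_final = ecological.merge(eco_to_final).groupby("final_unit")["births"].sum()
relative_diff = ((med_final - eco_final).abs() / med_final * 100).round(1)
print(relative_diff)   # % difference per final unit, used to validate the mapping
```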
Development of allergic sensitization and its relevance to paediatric asthma.
Oksel, Ceyda; Custovic, Adnan
2018-04-01
The purpose of this review is to summarize the recent evidence on the distinct atopic phenotypes and their relationship with childhood asthma. We start by considering definitions and phenotypic classification of atopy and then review evidence on its association with asthma in children. It is now well recognized that both asthma and atopy are complex entities encompassing various different sub-groups that also differ in the way they interconnect. The lack of gold standards for diagnostic markers of atopy and asthma further adds to the existing complexity over diagnostic accuracy and definitions. Although recent statistical phenotyping studies contributed significantly to our understanding of these heterogeneous disorders, translating these findings into meaningful information and effective therapies requires further work on understanding underpinning biological mechanisms. The disaggregation of allergic sensitization may help predict how the allergic disease is likely to progress. One of the important questions is how best to incorporate tests for the assessment of allergic sensitization into diagnostic algorithms for asthma, both in terms of confirming asthma diagnosis, and the assessment of future risk.
Qutrit witness from the Grothendieck constant of order four
NASA Astrophysics Data System (ADS)
Diviánszky, Péter; Bene, Erika; Vértesi, Tamás
2017-07-01
In this paper, we prove that KG(3)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan
There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying "best practice" and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as "whole-house" tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. "bill-disaggregation tools"), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail. Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key issues. The value of detail has a lot to do with the type of question(s) being asked by the user (e.g., the availability of dozens of miscellaneous appliances is immaterial for a user attempting to evaluate the potential for space-heating savings by installing a new furnace). More detail does not, according to our evaluation, automatically translate into a "better" or "more accurate" tool. Efforts to quantify and compare the "accuracy" of these tools are difficult at best, and prior tool-comparison studies have not undertaken this in a meaningful way. The ability to evaluate accuracy is inherently limited by the availability of measured data. Furthermore, certain tool outputs can only be measured against "actual" values that are themselves calculated (e.g., HVAC sizing), while others are rarely if ever available (e.g., measured energy use or savings for specific measures). Similarly challenging is to understand the sources of inaccuracies. There are many ways in which quantitative errors can occur in tools, ranging from programming errors to problems inherent in a tool's design. Due to hidden assumptions and non-variable "defaults", most tools cannot be fully tested across the desirable range of building configurations, operating conditions, weather locations, etc. Many factors conspire to confound performance comparisons among tools. Differences in inputs can range from weather city, to types of HVAC systems, to appliance characteristics, to occupant-driven effects such as thermostat management.
Differences in results would thus no doubt emerge from an extensive comparative exercise, but the sources or implications of these differences for the purposes of accuracy evaluation or tool development would remain largely unidentifiable (especially given the paucity of technical documentation available for most tools). For the tools that we tested, the predicted energy bills for a single test building ranged widely (by nearly a factor of three), and far more so at the end-use level. Most tools over-predicted energy bills and all over-predicted consumption. Variability was lower among disk-based tools, but they more significantly over-predicted actual use. The deviations (over-predictions) we observed from actual bills corresponded to up to $1400 per year (approx. 250 percent of the actual bills). For bill-disaggregation tools, wherein the results are forced to equal actual bills, the accuracy issue shifts to whether or not the total is properly attributed to the various end uses and to whether savings calculations are done accurately (a challenge that demands relatively rare end-use data). Here, too, we observed a number of dubious results. Energy savings estimates automatically generated by the web-based tools varied from $46/year (5 percent of predicted use) to $625/year (52 percent of predicted use).
Enhanced K-means clustering with encryption on cloud
NASA Astrophysics Data System (ADS)
Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.
2017-11-01
This paper addresses the problem of storing and managing big files on the cloud by implementing hashing on Hadoop for big data, and ensures security while uploading and downloading files. Cloud computing emphasizes data sharing and facilitates the sharing of infrastructure and resources.[10] Hadoop is open-source software that allows big files to be stored and managed on the cloud according to our needs. The K-means clustering algorithm calculates the distance between the centroid of each cluster and the data points. Hashing is a technique in which data are stored and retrieved using hash keys; the hashing algorithm, called a hash function, maps the original data and is later used to fetch the data stored at the specific key. [17] Encryption is a process that transforms electronic data into a non-readable form known as cipher text. Decryption is the opposite process: it transforms the cipher text back into plain text that the end user can read and understand. For encryption and decryption, a symmetric-key cryptographic algorithm is used; in this work the DES algorithm is used for secure storage of the files. [3]
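For the clustering step described above, a minimal K-means sketch is shown below (assign each point to its nearest centroid, then update the centroids). The toy data and choice of k are assumptions, and this is generic K-means rather than the paper's Hadoop-based implementation.

```python
# Minimal K-means: alternate nearest-centroid assignment and centroid update.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid, then nearest-centroid labels
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

pts = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(20, 2))
                 for loc in ((0, 0), (3, 3))])
labels, cents = kmeans(pts, k=2)
print(cents.round(2))
```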
Efficient dynamic simulation for multiple chain robotic mechanisms
NASA Technical Reports Server (NTRS)
Lilly, Kathryn W.; Orin, David E.
1989-01-01
An efficient O(mN) algorithm for dynamic simulation of simple closed-chain robotic mechanisms is presented, where m is the number of chains, and N is the number of degrees of freedom for each chain. It is based on computation of the operational space inertia matrix (6 x 6) for each chain as seen by the body, load, or object. Also, computation of the chain dynamics, when opened at one end, is required, and the most efficient algorithm is used for this purpose. Parallel implementation of the dynamics for each chain results in an O(N) + O(log₂ m + 1) algorithm.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. 
Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - the ARINC 653-partitioned Operating System, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV followed by Section V presenting integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
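To make the "state machine" framing concrete, the sketch below shows a tiny limit-monitoring state machine of the general kind such fault-management algorithms use (monitor a parameter, declare a fault after persistent limit violations, then trigger a response). The states, limit, and persistence count are hypothetical and not taken from SLS flight software.

```python
# Illustrative failure-detection state machine with a persistence filter.
from enum import Enum

class State(Enum):
    NOMINAL = 1
    SUSPECT = 2
    FAULT = 3

class LimitMonitor:
    def __init__(self, limit, persistence=3):
        self.limit = limit
        self.persistence = persistence   # consecutive violations before FAULT
        self.count = 0
        self.state = State.NOMINAL

    def update(self, value):
        if value > self.limit:
            self.count += 1
            self.state = State.FAULT if self.count >= self.persistence else State.SUSPECT
        else:
            self.count = 0
            self.state = State.NOMINAL
        return self.state

monitor = LimitMonitor(limit=100.0)       # e.g., a hypothetical pressure limit
for reading in (95, 101, 103, 104, 98):
    print(reading, monitor.update(reading).name)
```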
The ascent of kimberlite: Insights from olivine
NASA Astrophysics Data System (ADS)
Brett, R. C.; Russell, J. K.; Andrews, G. D. M.; Jones, T. J.
2015-08-01
Olivine xenocrysts are ubiquitous in kimberlite deposits worldwide and derive from the disaggregation of mantle-derived peridotitic xenoliths. Here, we provide descriptions of textural features in xenocrystic olivine from kimberlite deposits at the Diavik Diamond Mine, Canada and at Igwisi Hills volcano, Tanzania. We establish a relative sequence of textural events recorded by olivine during magma ascent through the cratonic mantle lithosphere, including: xenolith disaggregation, decompression fracturing expressed as mineral- and fluid-inclusion-rich sealed and healed cracks, grain size and shape modification by chemical dissolution and abrasion, late-stage crystallization of overgrowths on olivine xenocrysts, and lastly, mechanical milling and rounding of the olivine cargo prior to emplacement. Ascent through the lithosphere operates as a "kimberlite factory" wherein progressive upward dyke propagation of the initial carbonatitic melt fractures the overlying mantle to entrain and disaggregate mantle xenoliths. Preferential assimilation of orthopyroxene (Opx) xenocrysts by the silica-undersaturated carbonatitic melt leads to deep-seated exsolution of CO2-rich fluid generating buoyancy and supporting rapid ascent. Concomitant dissolution of olivine produces irregular-shaped relict grains preserved as cores to most kimberlitic olivine. Multiple generations of decompression cracks in olivine provide evidence for a progression in ambient fluid compositions (e.g., from carbonatitic to silicic) during ascent. Numerical modelling predicts tensile failure of xenoliths (disaggregation) and olivine (cracks) over ascent distances of 2-7 km and 15-25 km, respectively, at velocities of 0.1 to >4 m s-1. Efficient assimilation of Opx during ascent results in a silica-enriched, olivine-saturated kimberlitic melt (i.e. SiO2 >20 wt.%) that crystallizes overgrowths on partially digested and abraded olivine xenocrysts. Olivine saturation is constrained to occur at pressures <1 GPa; an absence of decompression cracks within olivine overgrowths suggests depths <25 km. Late stage (<25 km) resurfacing and reshaping of olivine by particle-particle milling is indicative of turbulent flow conditions within a fully fluidized, gas-charged, crystal-rich magma.
The Challenge of City-Level Data-Gathering for Implementing SDG 11 in Africa
NASA Astrophysics Data System (ADS)
Elias, P. O.
2017-12-01
Implementing Sustainable Development Goal 11 in Africa, which includes measuring and monitoring social and economic welfare indicators at the city level, requires data of the best quality. In recent years, there has been progress in national statistics and census surveys, yet data gathering in many African countries is not accurate, timely, disaggregated or widely usable. This often diminishes the capability of governments to tackle urban development challenges, which are particularly exacerbated by inequality, poverty and uncontrolled development, especially in cities. To support knowledge-driven decisions and policies there is a need to improve data-gathering systems covering health, education and safety, economy and poverty, land, housing and environment, trade and commerce, and population and demography. Also, the underlying dynamics, processes, distributions, patterns, trends and disparities inherent in African cities require the breaking down of aggregated data into their component parts or smaller units, which underscores the need for an urban data revolution towards achieving SDG 11. In Africa, the process of bringing together diverse data communities to embrace a wide range of data sources, tools and innovative technologies, and to provide disaggregated data for decision-making, service delivery and citizen engagement, is still emerging. Several factors are inhibiting this urban data revolution and need to be overcome before we can provide more evidence, more data and more certainty for decision makers towards achieving urban development targets and sustainable cities for Africa. The paper examines the challenge of city-level data-gathering for implementing SDG 11 in Africa. Specifically, it examines the role of cities in implementing SDG 11 in Africa and the need to disaggregate data at the city level; it assesses existing data sources, compilation and dissemination channels, as well as the challenges of deploying innovative techniques and strategies, including digital and social media platforms, and concludes by suggesting sustainable options for evolving cutting-edge strategies that integrate diverse data communities for responsible city-level data-gathering that is reliable, timely, disaggregated and widely usable.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. This cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SS). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in the WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. Among the parameters considered in the performance analysis are interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve the system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
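As a rough illustration of centralized, interference-aware scheduling, the sketch below greedily assigns each mesh link the earliest time slot not used by any conflicting link. The conflict rule (links sharing a node or one hop apart conflict) and the toy topology are assumptions; the paper's algorithm additionally weighs hop count, load balancing, QoS, and channel assignment, which are not modelled here.

```python
# Greedy conflict-free slot assignment on a toy WiMAX mesh topology.
import networkx as nx

topology = nx.Graph([("BS", "A"), ("BS", "B"), ("A", "C"), ("B", "D")])
links = list(topology.edges)                 # transmissions to schedule

def conflicts(l1, l2, g):
    shared = set(l1) & set(l2)
    one_hop = any(g.has_edge(u, v) for u in l1 for v in l2)
    return bool(shared) or one_hop            # primary or secondary interference

schedule = {}
for link in links:
    used = {schedule[other] for other in schedule if conflicts(link, other, topology)}
    slot = 0
    while slot in used:                        # earliest conflict-free slot
        slot += 1
    schedule[link] = slot

print(schedule)   # e.g. {('BS','A'): 0, ('BS','B'): 1, ('A','C'): 2, ('B','D'): 2}
```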
Data Analytics for Smart Parking Applications.
Piovesan, Nicola; Turi, Leo; Toigo, Enrico; Martinez, Borja; Rossi, Michele
2016-09-23
We consider real-life smart parking systems where parking lot occupancy data are collected from field sensor devices and sent to backend servers for further processing and usage for applications. Our objective is to make these data useful to end users, such as parking managers, and, ultimately, to citizens. To this end, we concoct and validate an automated classification algorithm having two objectives: (1) outlier detection: to detect sensors with anomalous behavioral patterns, i.e., outliers; and (2) clustering: to group the parking sensors exhibiting similar patterns into distinct clusters. We first analyze the statistics of real parking data, obtaining suitable simulation models for parking traces. We then consider a simple classification algorithm based on the empirical complementary distribution function of occupancy times and show its limitations. Hence, we design a more sophisticated algorithm exploiting unsupervised learning techniques (self-organizing maps). These are tuned following a supervised approach using our trace generator and are compared against other clustering schemes, namely expectation maximization, k-means clustering and DBSCAN, considering six months of data from a real sensor deployment. Our approach is found to be superior in terms of classification accuracy, while also being capable of identifying all of the outliers in the dataset.
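As a rough illustration of the clustering stage described above (not the authors' code), the sketch below builds a per-sensor feature vector from occupancy-time statistics, groups the sensors with k-means, and flags sensors far from their cluster centroid as outliers; the features, cluster count and outlier threshold are assumptions for the example.

```python
# Illustrative sketch: characterise each parking sensor by summary statistics
# of its occupancy times, cluster the feature vectors with k-means, and flag
# sensors unusually far from their centroid as outliers. Feature choices and
# the outlier threshold are assumptions, not the paper's tuned algorithm.
import numpy as np
from sklearn.cluster import KMeans

def sensor_features(occupancy_times):
    """Feature vector for one sensor from its occupancy durations (minutes)."""
    t = np.asarray(occupancy_times, dtype=float)
    return np.array([t.mean(), t.std(), np.median(t), np.percentile(t, 90)])

def classify(sensors, n_clusters=3, outlier_z=2.0):
    X = np.vstack([sensor_features(t) for t in sensors.values()])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    # Distance of each sensor to its own cluster centroid.
    d = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    thresh = d.mean() + outlier_z * d.std()
    labels = {s: int(l) for s, l in zip(sensors, km.labels_)}
    outliers = [s for s, di in zip(sensors, d) if di > thresh]
    return labels, outliers

# Synthetic example: two behaviour groups plus one anomalous ("stuck") sensor.
rng = np.random.default_rng(0)
sensors = {f"s{i}": rng.exponential(30, 200) for i in range(5)}
sensors.update({f"s{i}": rng.exponential(120, 200) for i in range(5, 10)})
sensors["broken"] = rng.exponential(2000, 200)
print(classify(sensors, n_clusters=2))
```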
Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide
NASA Astrophysics Data System (ADS)
Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.
2017-12-01
GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, these algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from the first-alert solutions with values recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
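As background to the unsaturated magnitude estimates discussed above, the sketch below shows the standard relation between a finite-fault model and moment magnitude: the seismic moment M0 = mu * L * W * D (rigidity times rupture area times average slip) converted to Mw. The rigidity value and the example fault dimensions are illustrative assumptions, not G-larmS output.

```python
# Hypothetical illustration of how a finite-fault solution maps to a moment
# magnitude: Mw is computed from the seismic moment M0 = mu * L * W * D.
# The rigidity and fault dimensions below are assumed example values.
import math

def moment_magnitude(length_km, width_km, avg_slip_m, rigidity_pa=30e9):
    """Moment magnitude from rupture length, width, average slip and rigidity."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = rigidity_pa * area_m2 * avg_slip_m        # seismic moment in N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)    # standard Mw definition

# A great megathrust-sized rupture: 300 km x 100 km with 10 m average slip.
print(f"Mw = {moment_magnitude(300, 100, 10):.2f}")
```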
NASA Astrophysics Data System (ADS)
Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas
2015-04-01
Based on an international literature review of the model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. These can be organized spatially in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select or change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where the lumped conceptual model is first calibrated using a systematic, data-based approach, followed by a disaggregation step in which the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only a few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters while keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak flows, low flows, and cumulative runoff totals and sub-flows (at downstream and internal gauging stations). For the distributed models, additional validation of the spatial results was done for the groundwater head values at observation wells. To ensure that the lumped model can produce results as accurate as, or close to, those of the spatially distributed models regardless of the number of parameters and implemented physical processes, it was checked whether the structure of the lumped models had to be adjusted. The concept has been implemented in a PCRaster-Python platform and tested for two Belgian case studies (the catchments of the rivers Dijle and Grote Nete). So far, use is made of existing model structures (NAM, PDM, VHM and HBV). Acknowledgement: These results were obtained within the scope of research activities for the Flemish Environment Agency (VMM) - division Operational Water Management on "Next Generation hydrological modeling", in cooperation with IMDC consultants, and for Flanders Hydraulics Research (Waterbouwkundig Laboratorium) on "Effect of climate change on the hydrological regime of navigable watercourses in Belgium".
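A minimal sketch of the parameter-disaggregation step described above is given below, under the assumption that a calibrated lumped parameter is distributed over grid cells in proportion to a spatial catchment characteristic, with one extra calibration factor controlling how strongly the spatial pattern is expressed while the catchment-average value is preserved; variable names and the exact scaling rule are illustrative.

```python
# Illustrative sketch of disaggregating a lumped parameter to grid cells.
# The extra calibration factor "beta" scales the relative spatial differences
# (beta = 0 gives a uniform field); the catchment mean is preserved so the
# lumped behaviour is retained. Names and scaling rule are assumptions.
import numpy as np

def disaggregate_parameter(lumped_value, characteristic, beta=1.0):
    """Return per-cell parameter values whose mean equals the lumped value.

    characteristic : per-cell catchment property (e.g. a soil storage index)
    beta           : calibration factor amplifying/damping the spatial pattern
    """
    c = np.asarray(characteristic, dtype=float)
    rel = c / c.mean()                               # relative pattern, mean 1
    field = lumped_value * rel ** beta               # express the pattern
    return field * (lumped_value / field.mean())     # re-normalise the mean

# Example: a lumped soil-storage capacity of 120 mm spread over 6 grid cells.
soil_index = np.array([0.6, 0.8, 1.0, 1.1, 1.3, 1.4])
print(disaggregate_parameter(120.0, soil_index, beta=0.5))
```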
Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth-Delay Product Networks
NASA Astrophysics Data System (ADS)
Chan, Yi-Cheng; Lin, Chia-Liang; Ho, Cheng-Yuan
An important issue in designing a TCP congestion control algorithm is that it should allow the protocol to quickly adjust the end-to-end communication rate to the bandwidth of the bottleneck link. However, TCP congestion control may function poorly in high bandwidth-delay product networks because of its slow response with large congestion windows. In this paper, we propose an enhanced version of TCP Vegas called Quick Vegas, in which we present an efficient congestion window control algorithm for a TCP source. Our algorithm improves the slow-start and congestion-avoidance techniques of the original Vegas. Simulation results show that Quick Vegas significantly improves the performance of connections while remaining fair as the bandwidth-delay product increases.
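For context, the sketch below shows the classical per-RTT congestion-avoidance rule of original TCP Vegas, whose one-packet-per-RTT adjustment is the slow response that Quick Vegas is designed to overcome; it is not the Quick Vegas algorithm itself, and the alpha/beta thresholds follow the usual Vegas description.

```python
# Minimal sketch of original TCP Vegas congestion avoidance (window in packets).
# Vegas estimates the number of packets queued in the network and adjusts the
# window by at most one packet per RTT, which reacts slowly when the
# bandwidth-delay product (and hence the target window) is large.
def vegas_update(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """Return the new congestion window (packets) after one RTT."""
    expected = cwnd / base_rtt                 # throughput if no queuing
    actual = cwnd / current_rtt                # measured throughput
    diff = (expected - actual) * base_rtt      # estimated packets in the queue
    if diff < alpha:
        return cwnd + 1                        # too few packets in flight: grow
    if diff > beta:
        return cwnd - 1                        # queue building up: back off
    return cwnd                                # within the target band: hold

# With a large window and an almost-empty queue, Vegas still grows by only one
# packet per RTT -- the slow response that Quick Vegas addresses.
print(vegas_update(cwnd=1000, base_rtt=0.1, current_rtt=0.10005))
```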
Use of raw materials in the United States from 1900 through 2014
Matos, Grecia R.
2017-08-22
The economic growth of an industrialized nation such as the United States requires raw materials for construction (buildings, bridges, highways, and so forth), defense, and processing and manufacture of goods and services. Since the beginning of the 20th century, the types and quantities of raw materials used have increased and changed significantly. This fact sheet quantifies the amounts of raw materials (other than food and fuel) that have been used in the U.S. economy annually for a period of 115 years, from 1900 through 2014. It provides a broad overview of the quantity (weight) of nonfood and nonfuel materials used in the economy and illustrates the use and significance of raw nonfuel minerals in particular as building blocks of society.These data have been compiled to help the public and policymakers understand the changing annual flow of raw materials put into use in the United States. Such information can be helpful in assessing the potential effects of materials use on the environment, assessing materials’ intensity of use, and examining the role that these materials play in the economy. The data presented indicate the substitution and shift in materials usage from renewable to nonrenewable materials during the 20th century. The disaggregated quantities by commodity (not shown in this fact sheet) may be tested against supply adequacy and end of life issues.
Cheng, Jun; Zhao, Fei; Xia, Yinyin; Zhang, Hui; Wilkinson, Ewan; Das, Mrinalini; Li, Jie; Chen, Wei; Hu, Dongmei; Jeyashree, Kathiresan; Wang, Lixia
2017-01-01
Objective To calculate the yield and cost per diagnosed tuberculosis (TB) case for three World Health Organization screening algorithms and one using the Chinese National TB Program (NTP) TB suspect definitions, using data from a TB prevalence survey of people aged 65 years and over in China, 2013. Methods This was an analytic study using data from the above survey. Risk groups were defined and the prevalence of new TB cases in each group calculated. Costs of each screening component were used to give indicative costs per case detected. Yield, number needed to screen (NNS) and cost per case were used to assess the algorithms. Findings The prevalence survey identified 172 new TB cases among 34,250 participants. Prevalence varied greatly between groups, from 131/100,000 to 4,651/100,000. Two groups were chosen to compare the algorithms. The medium-risk group (living in a rural area and being a man, a previous TB case, a close contact, having a BMI <18.5, or being a tobacco user) had an appreciably higher cost per case in the three algorithms (USD 221, 298 and 963) than the high-risk group of all previous TB cases and all close contacts (USD 72, 108 and 309), but detected two to four times more TB cases in the population. Using a chest X-ray (CXR) as the initial screening tool in the medium-risk group cost the most (USD 963) but detected 67% of all the new cases. Using the NTP definition of TB suspects made little difference. Conclusions To “End TB”, many more TB cases have to be identified. Screening only the highest-risk groups identified under 14% of the undetected cases. To “End TB”, medium-risk groups will need to be screened. Using a CXR for initial screening results in a much higher yield, at what should be an acceptable cost. PMID:28594824
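For clarity, the sketch below shows how the yield metrics quoted above are computed: the number needed to screen (NNS) is the group size divided by the cases found, and the cost per diagnosed case is the per-person screening cost multiplied by the NNS. The per-person cost and group figures in the example are illustrative placeholders, not values from the study.

```python
# Worked sketch of the screening yield metrics (NNS and cost per case).
# The inputs below are made-up example numbers, not figures from the survey.
def screening_metrics(group_size, cases_found, cost_per_person_screened):
    nns = group_size / cases_found                   # people screened per case found
    cost_per_case = cost_per_person_screened * nns   # total cost divided by cases
    return nns, cost_per_case

# Example: 10,000 people screened, 50 cases found, assumed USD 4 per screen.
nns, cost = screening_metrics(group_size=10_000, cases_found=50,
                              cost_per_person_screened=4.0)
print(f"NNS = {nns:.0f}, cost per case = USD {cost:.0f}")
```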
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, a single table remains that contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
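For context, the sketch below is a compact hard-decision Viterbi decoder for a small rate-1/2 convolutional code, illustrating the add-compare-select trellis recursion that the RMLD algorithm refines for block codes; it is not the RMLD algorithm itself, and the code and message are illustrative.

```python
# Compact hard-decision Viterbi decoder for a rate-1/2, constraint-length-3
# convolutional code with generators 7 and 5 (octal). It shows the
# add-compare-select recursion over a 4-state trellis.
def viterbi_decode(received_bits):
    """received_bits: flat list of hard bits, two per information bit."""
    n_states = 4                                      # 2-bit shift register

    def step(state, bit):
        # Next state and the two output bits for generators (1+D+D^2, 1+D^2).
        s1, s0 = state >> 1, state & 1
        out = (bit ^ s1 ^ s0, bit ^ s0)
        return (bit << 1) | s1, out

    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)             # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received_bits), 2):
        r = received_bits[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:
                continue
            for bit in (0, 1):                        # hypothesise the next info bit
                nxt, out = step(state, bit)
                m = metric[state] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[nxt]:               # compare-select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

# The codeword for message 1 0 1 1 (no tail bits) is 11 10 00 01;
# the decoder recovers the message.
print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))
```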
Implementing a self-structuring data learning algorithm
NASA Astrophysics Data System (ADS)
Graham, James; Carson, Daniel; Ternovskiy, Igor
2016-05-01
In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create it. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion is geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to apply our algorithm to more general problems, but to do so within a reasonable time, we needed to pick a place to start.