A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze each source separately, ignoring the comprehensive impact of multi-source uncertainties. To evaluate this synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution that evaluates the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation, with Bayesian inference accomplishing the uncertainty updating. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior satisfies the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that represents the synthetical impact of all uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
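Below is a minimal, hypothetical sketch of the kind of Bayesian update the abstract describes: a maximum-entropy prior (a Gaussian, given only mean and variance constraints) on the depth of a geological interface is combined with the likelihood of a noisy borehole observation to yield a posterior. All numbers and variable names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

depths = np.linspace(80.0, 120.0, 401)        # candidate interface depths (m)

# Maximum-entropy prior under mean/variance constraints -> Gaussian (assumed values)
prior_mean, prior_sd = 100.0, 8.0
prior = np.exp(-0.5 * ((depths - prior_mean) / prior_sd) ** 2)
prior /= prior.sum()

# Likelihood of a noisy borehole pick of the same interface (assumed data error)
obs_depth, obs_sd = 95.0, 3.0
likelihood = np.exp(-0.5 * ((depths - obs_depth) / obs_sd) ** 2)

# Bayes' rule on the grid: posterior is proportional to prior * likelihood
posterior = prior * likelihood
posterior /= posterior.sum()

post_mean = np.sum(depths * posterior)
post_sd = np.sqrt(np.sum((depths - post_mean) ** 2 * posterior))
print(f"posterior depth: {post_mean:.1f} m +/- {post_sd:.1f} m")
```

Further observations (other boreholes, geophysical picks, interpreted horizons) would be folded in the same way, which is the "gradual integration" of multi-source uncertainty the abstract refers to.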
Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins
Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.
2011-01-01
Stomach content analysis (SCA) and, more recently, stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may make it difficult to quantify inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates derived from SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source, isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
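As a worked illustration of the two-source mixing logic mentioned above, the sketch below recovers fish vs. krill diet fractions from a single isotope value after correcting for trophic discrimination. The function name and all isotope values are hypothetical examples, not the study's data or its mixing-model software.

```python
def two_source_mixing(delta_consumer, delta_fish, delta_krill, tdf):
    """Return (fraction_fish, fraction_krill) from a single isotope value (e.g. d15N)."""
    delta_diet = delta_consumer - tdf              # back-correct trophic enrichment
    f_fish = (delta_diet - delta_krill) / (delta_fish - delta_krill)
    f_fish = min(max(f_fish, 0.0), 1.0)            # keep proportions physically valid
    return f_fish, 1.0 - f_fish

# Hypothetical d15N values (per mil), not the study's measurements
f_fish, f_krill = two_source_mixing(delta_consumer=9.5, delta_fish=10.5,
                                    delta_krill=4.0, tdf=3.0)
print(f"fish: {f_fish:.0%}, krill: {f_krill:.0%}")
```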
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
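The following is a simplified, hedged sketch of the multi-source idea only (not the LINKS implementation): per-voxel features from several modalities are stacked, a random forest predicts tissue classes, and the estimated probability maps are appended as extra features for a second iteration. Data shapes, labels, and feature choices are toy assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels = 5000
# Hypothetical per-voxel intensities for three modalities (columns: T1, T2, FA)
X = rng.normal(size=(n_voxels, 3))
y = rng.integers(0, 3, size=n_voxels)              # toy labels: 0=CSF, 1=GM, 2=WM

# Iteration 1: classify from image intensities only
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prob_maps = forest.predict_proba(X)                # estimated tissue probability maps

# Iteration 2: append the probability maps as additional "source" features
X_aug = np.hstack([X, prob_maps])
forest2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y)
print(X_aug.shape)                                 # (5000, 6)
```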
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy. PMID:25541188
An integrated multi-source energy harvester based on vibration and magnetic field energy
NASA Astrophysics Data System (ADS)
Hu, Zhengwen; Qiu, Jing; Wang, Xian; Gao, Yuan; Liu, Xin; Chang, Qijie; Long, Yibing; He, Xingduo
2018-05-01
In this paper, an integrated multi-source energy harvester (IMSEH) employing a specially shaped cantilever beam and a piezoelectric transducer to convert vibration and magnetic field energy into electrical energy is presented. The electric output performance of the proposed IMSEH has been investigated. Compared to a traditional multi-source energy harvester (MSEH) or single-source energy harvester (SSEH), the proposed IMSEH can simultaneously harvest vibration and magnetic field energy with an integrated structure, and the electric output is greatly improved. With other conditions kept identical, the IMSEH can reach a high output voltage of 12.8 V. Remarkably, the proposed IMSEH has great potential for application in wireless sensor networks.
Multisource oil spill detection
NASA Astrophysics Data System (ADS)
Salberg, Arnt B.; Larsen, Siri O.; Zortea, Maciel
2013-10-01
In this paper we discuss how multisource data (wind, ocean-current, optical, bathymetric, automatic identification systems (AIS)) may be used to improve oil spill detection in SAR images, with emphasis on the use of automatic oil spill detection algorithms. We focus particularly on AIS, optical, and bathymetric data. For the AIS data we propose an algorithm for integrating AIS ship tracks into automatic oil spill detection in order to improve the confidence estimate of a potential oil spill. We demonstrate the use of ancillary data on a set of SAR images. Regarding the use of optical data, we did not observe a clear correspondence between high chlorophyll values (estimated from products derived from optical data) and observed slicks in the SAR image. Bathymetric data was shown to be a good data source for removing false detections caused by, e.g., sand banks at low tide. For the AIS data we observed that a polluter could be identified for some dark slicks; however, a precise oil drift model is needed in order to identify the polluter with high certainty.
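One possible way to fold AIS ship tracks into the confidence estimate of a SAR dark-spot detection is sketched below; this is a heavily hedged illustration of the general idea, not the paper's algorithm. The distance-based boost, the influence scale, and all coordinates are assumptions introduced only for the example.

```python
import math

def ais_adjusted_confidence(base_conf, slick_xy, track_points_xy, influence_km=5.0):
    """Increase detector confidence when an AIS track passes near the candidate slick."""
    d_min = min(math.dist(slick_xy, p) for p in track_points_xy)   # nearest track point (km)
    boost = math.exp(-d_min / influence_km)                        # ~1 near the track, ->0 far away
    return min(1.0, base_conf + (1.0 - base_conf) * 0.5 * boost)

# Hypothetical example: detector says 0.6, nearest AIS track point is ~1.8 km away
print(ais_adjusted_confidence(0.6, (10.0, 20.0), [(11.5, 21.0), (30.0, 5.0)]))
```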
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is significantly important in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories, and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run in the arid and semi-arid Heihe River Basin, China, to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m-2 d-1 and an R of 0.842 when compared with ground-based GPP, while an RMSE of 8.010 gC m-2 d-1 and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, which supports the reliability of our modelling results. This research suggested that the utilization of multi-source data at various scales would help establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution.
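The light-use-efficiency logic described above reduces to two simple relations: GPP = PAR × FPAR × LUE and NPP = GPP − Ra. The sketch below works through them with placeholder coefficients; none of the numbers are the MuSyQ-NPP parameterisation.

```python
def daily_gpp(par_mj_m2, fpar, lue_gc_per_mj):
    """GPP in gC m-2 d-1 from PAR (MJ m-2 d-1), FPAR (0-1) and LUE (gC MJ-1)."""
    return par_mj_m2 * fpar * lue_gc_per_mj

def daily_npp(gpp, autotrophic_respiration):
    """NPP as the balance between GPP and autotrophic respiration Ra."""
    return gpp - autotrophic_respiration

gpp = daily_gpp(par_mj_m2=8.0, fpar=0.6, lue_gc_per_mj=1.2)   # hypothetical inputs
ra = 0.45 * gpp                                               # assumed Ra fraction, not the model's
print(f"GPP = {gpp:.2f} gC m-2 d-1, NPP = {daily_npp(gpp, ra):.2f} gC m-2 d-1")
```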
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is significantly important in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories, and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run in the arid and semi-arid Heihe River Basin, China, to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m-2 d-1 and an R of 0.842 when compared with ground-based GPP, while an RMSE of 8.010 gC m-2 d-1 and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, which supports the reliability of our modelling results. This research suggested that the utilization of multi-source data at various scales would help establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution. PMID:27088356
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-01-01
The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, multiple satellite navigation systems and various ionospheric observing instruments provide abundant data that can be employed to reconstruct ionospheric electron density (IED). In order to improve the vertical resolution of IED, we study IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764
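The inversion step behind CIT can be pictured as a linear system in which slant observations relate to voxel electron densities through a ray-geometry matrix; multi-source data simply add rows to that system. The sketch below uses generic Tikhonov regularisation to stabilise a toy ill-posed problem and is not the authors' reconstruction algorithm; all sizes and the regularisation strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rays, n_voxels = 40, 100                      # fewer rays than voxels: ill-posed
A = rng.random((n_rays, n_voxels))              # hypothetical ray-path lengths per voxel
x_true = rng.random(n_voxels)                   # "true" electron density (toy)
y = A @ x_true + rng.normal(scale=0.01, size=n_rays)   # slant observations with noise

# Adding occultation, altimetry or ionosonde data would append rows to A and y.
lam = 0.1                                       # regularisation strength (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```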
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-08-12
The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, multiple satellite navigation systems and various ionospheric observing instruments provide abundant data that can be employed to reconstruct ionospheric electron density (IED). In order to improve the vertical resolution of IED, we study IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone.
Multisource Data Integration in Remote Sensing
NASA Technical Reports Server (NTRS)
Tilton, James C. (Editor)
1991-01-01
Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world - consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.
NASA Astrophysics Data System (ADS)
Huang, W.; Jiang, J.; Zha, Z.; Zhang, H.; Wang, C.; Zhang, J.
2014-04-01
Geospatial data resources are the foundation of the construction of a geo portal designed to provide online geoinformation services for government, enterprises and the public. It is vital to keep geospatial data fresh, accurate and comprehensive in order to satisfy the requirements of applications such as geographic location, route navigation, geo search and so on. One of the major problems we are facing is data acquisition, and integrating multi-source geospatial data is our main means of data acquisition. This paper introduces a practical approach for integrating multi-source geospatial data with different data models, structures and formats, which provided effective technical support for the construction of the National Geospatial Information Service Platform of China (NGISP). NGISP is China's official geo portal, providing online geoinformation services based on the internet, the e-government network and the classified network. Within the NGISP architecture there are three kinds of nodes: national, provincial and municipal. The geospatial data comes from these nodes, and the different datasets are heterogeneous. According to the results of the analysis of the heterogeneous datasets, the first step is to define the basic principles of data fusion, covering the following aspects: 1. location precision; 2. geometric representation; 3. up-to-date state; 4. attribute values; and 5. spatial relationships. The technical procedure is then researched, and a method for processing different categories of features such as roads, railways, boundaries, rivers, settlements and buildings is proposed based on these principles. A case study in Jiangsu province demonstrated the applicability of the principles, procedure and method of multi-source geospatial data integration.
The Finnish multisource national forest inventory: small-area estimation and map production
Erkki Tomppo
2009-01-01
A driving force motivating development of the multisource national forest inventory (MS-NFI) in connection with the Finnish national forest inventory (NFI) was the desire to obtain forest resource information for smaller areas than is possible using field data only without significantly increasing the cost of the inventory. A basic requirement for the method was that...
Multisource geological data mining and its utilization in uranium resources exploration
NASA Astrophysics Data System (ADS)
Zhang, Jie-lin
2009-10-01
Nuclear energy, as one of the clean energy sources, plays an important role in China's economic development, and according to the national long-term development strategy, many more nuclear power plants will be built in the next few years, so uranium resources exploration faces a great challenge. Research and practice in mineral exploration demonstrate that utilizing modern Earth Observing System (EOS) technology and developing new multi-source geological data mining methods are effective approaches to uranium deposit prospecting. Based on data mining and knowledge discovery technology, this paper uses multi-source geological data to characterize the electromagnetic spectral, geophysical and spatial information of uranium mineralization factors, and provides technical support for uranium prospecting integrated with field remote sensing geological surveys. The multi-source geological data used in this paper include satellite hyperspectral imagery (Hyperion), high spatial resolution remote sensing data, uranium geological information, airborne radiometric data, and aeromagnetic and gravity data, and related data mining methods have been developed, such as fusion of optical data and Radarsat imagery, and information integration of remote sensing and geophysical data. Based on the above approaches, the multi-geoscience information of uranium mineralization factors, including complex polystage rock masses, mineralization-controlling faults and hydrothermal alterations, has been identified, the metallogenic potential of uranium has been evaluated, and some predicting areas have been located.
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral studies by replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful estimates. The random finite set framework provides a mathematical probabilistic model suitable for analysis and for the synthesis of an optimal estimation algorithm. The proposed algorithm has been verified using a simulation and a controlled test experiment.
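To make the fusion of per-node DOAs concrete, the sketch below triangulates a single source by least-squares intersection of bearing lines from several nodes. This is only the simplest deterministic baseline for combining DOAs, not the paper's random-finite-set estimator; node positions and the source location are invented for the example.

```python
import numpy as np

def triangulate(node_xy, bearings_rad):
    """Least-squares intersection of bearing lines from known node positions."""
    N, b = [], []
    for (x, y), theta in zip(node_xy, bearings_rad):
        n = np.array([-np.sin(theta), np.cos(theta)])   # normal to the bearing line
        N.append(n)
        b.append(n @ np.array([x, y]))                  # line constraint: n . s = n . p
    src, *_ = np.linalg.lstsq(np.array(N), np.array(b), rcond=None)
    return src

nodes = [(0.0, 0.0), (100.0, 0.0), (50.0, 80.0)]        # hypothetical node layout (m)
true_src = np.array([60.0, 40.0])
bearings = [np.arctan2(true_src[1] - y, true_src[0] - x) for x, y in nodes]
print(triangulate(nodes, bearings))                      # ~[60, 40]
```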
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission sources from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located near each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian Stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28 days. The meteorological dataset of the fluxnet FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor heights. The uncertainties and systematic biases are evaluated and discussed.
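The multi-source inversion can be viewed schematically as a linear problem: time-averaged concentrations at the sensors equal a dispersion matrix times the unknown source strengths, plus a background. The sketch below recovers the sources and an unknown background by least squares; it is a conceptual stand-in, not WindTrax or FIDES, and every coefficient is an assumed toy value.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 12, 9                       # e.g. 9 plots, sensors at two heights
D = rng.random((n_sensors, n_sources)) * 0.1       # hypothetical dispersion coefficients
E_true = np.array([5.0, 4.0, 3.0, 2.0, 1.5, 1.0, 0.8, 0.5, 0.2])  # toy source strengths
c_bg = 0.5                                         # assumed background concentration
C_obs = D @ E_true + c_bg + rng.normal(scale=0.02, size=n_sensors)

# Background treated as unknown: solve for the 9 emissions plus c_bg jointly
A = np.hstack([D, np.ones((n_sensors, 1))])
sol, *_ = np.linalg.lstsq(A, C_obs, rcond=None)
E_hat, c_bg_hat = sol[:-1], sol[-1]
print(np.round(E_hat, 2), round(float(c_bg_hat), 2))
```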
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighted matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space and ground combined TT&C (Telemetry, Tracking and Command) technology. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the data-depth weighted matrix. With the model error and outliers restricted, the VCE can be improved and used to estimate the weight matrix for observation data affected by uncertain model errors or outliers. A simulation experiment was carried out under the circumstances of space and ground combined TT&C. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlier data.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
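The iterative structure of such a correction can be sketched as follows: at each iteration the scatter predicted from the current image estimate is subtracted from the measured projections before reconstruction, so successive images approach the scatter-free result. The analytic physics model and the reconstruction are represented here by placeholder functions, so this is a structural sketch only, not the published method.

```python
import numpy as np

def estimate_scatter(image):
    """Placeholder for the physics-model scatter prediction (assumption, not the paper's model)."""
    return 0.1 * np.mean(image) * np.ones_like(image)

def reconstruct(projections):
    """Placeholder for the CT reconstruction step (assumption)."""
    return projections.copy()

measured = np.full((64, 64), 100.0) + 10.0         # toy projections containing scatter
image = reconstruct(measured)                      # initial, scatter-contaminated image
for _ in range(3):                                 # a few iterations usually suffice
    scatter = estimate_scatter(image)              # predict scatter from current estimate
    image = reconstruct(measured - scatter)        # reconstruct from corrected projections
print(float(image.mean()))
```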
Evaluation of the Maximum Allowable Cost Program
Lee, A. James; Hefner, Dennis; Dobson, Allen; Hardy, Ralph
1983-01-01
This article summarizes an evaluation of the Maximum Allowable Cost (MAC)-Estimated Acquisition Cost (EAC) program, the Federal Government's cost-containment program for prescription drugs. The MAC-EAC regulations which became effective on August 26, 1976, have four major components: (1) Maximum Allowable Cost reimbursement limits for selected multisource or generically available drugs; (2) Estimated Acquisition Cost reimbursement limits for all drugs; (3) “usual and customary” reimbursement limits for all drugs; and (4) a directive that professional fee studies be performed by each State. The study examines the benefits and costs of the MAC reimbursement limits for 15 dosage forms of five multisource drugs and EAC reimbursement limits for all drugs for five selected States as of 1979. PMID:10309857
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
NASA Astrophysics Data System (ADS)
Han, P.; Long, D.
2017-12-01
Snow water equivalent (SWE) and total water storage (TWS) changes are important hydrological state variables over cryospheric regions such as China's Upper Yangtze River (UYR) basin. Accurate simulation of these two state variables plays a critical role in understanding hydrological processes over this region and, in turn, benefits water resource management, hydropower development, and ecological integrity over the lower reaches of the Yangtze River, one of the largest rivers globally. In this study, an improved CREST model coupled with a snow and glacier melting module was used to simulate SWE and TWS changes over the UYR and to quantify the contributions of snow and glacier meltwater to the total runoff. Forcing, calibration, and validation data are mainly from multi-source remote sensing observations, including satellite-based precipitation estimates, passive microwave remote sensing-based SWE, and GRACE-derived TWS changes, along with streamflow measurements at the Zhimenda gauging station. Results show that multi-source remote sensing information can be extremely valuable in model forcing, calibration, and validation over this poorly gauged region. The simulated SWE and TWS changes and the observed counterparts are highly consistent, with NSE coefficients higher than 0.8. The results also show that the contributions of snow and glacier meltwater to the total runoff are 8% and 6%, respectively, during the period 2003‒2014, representing an important source of runoff. Moreover, the TWS is found to increase at a rate of 5 mm/a (~0.72 Gt/a) for the period 2003‒2014. The snow melting module may overestimate SWE for high precipitation events and was improved in this study. Key words: CREST model; Remote Sensing; Melting model; Source Region of the Yangtze River
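A common form for the kind of snow/glacier melting term described above is a temperature-index (degree-day) relation, sketched below. The degree-day factor and threshold temperature are generic assumptions, not the study's calibrated CREST parameters.

```python
def degree_day_melt(t_air_c, swe_mm, ddf_mm_per_c_day=4.0, t_base_c=0.0):
    """Daily melt (mm), limited by the available snow water equivalent."""
    potential_melt = ddf_mm_per_c_day * max(t_air_c - t_base_c, 0.0)
    return min(potential_melt, swe_mm)

swe = 120.0                                   # mm, hypothetical initial SWE
for t in [-3.0, 1.0, 4.0, 6.0]:               # daily mean air temperatures (degrees C)
    melt = degree_day_melt(t, swe)
    swe -= melt
    print(f"T={t:+.1f} C  melt={melt:.1f} mm  SWE={swe:.1f} mm")
```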
Integrated Dynamic Transit Operations (IDTO) concept of operations.
DOT National Transportation Integrated Search
2012-05-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Application of Ontology Technology in Health Statistic Data Analysis.
Guo, Minjiang; Hu, Hongpu; Lei, Xingyun
2017-01-01
Research Purpose: to establish a health management ontology for the analysis of health statistics data. Proposed Methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistics data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and enhancement of management efficiency.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as the weighted sum of evaluation indices and optimizes this objective with GSDA so as to obtain a higher-resolution RS image. As discussed above, the bullet points of the text are summarized as follows: • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules. • This text proposes the model operator and the observed operator as the fusion scheme of RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
NASA Astrophysics Data System (ADS)
Dube, Timothy; Sibanda, Mbulisi; Shoko, Cletah; Mutanga, Onisimo
2017-10-01
Forest stand volume is one of the crucial stand parameters influencing the ability of forests to provide ecosystem goods and services. This study thus aimed at examining the potential of integrating multispectral SPOT 5 imagery with ancillary data (forest age and rainfall metrics) in estimating the stand volume of coppiced and planted Eucalyptus spp. in KwaZulu-Natal, South Africa. To achieve this objective, the Partial Least Squares Regression (PLSR) algorithm was used. The PLSR algorithm was implemented in three tiers of analysis: stage I, using ancillary data as an independent dataset; stage II, using SPOT 5 spectral bands as an independent dataset; and stage III, using combined SPOT 5 spectral bands and ancillary data. The results of the study showed that the independent ancillary dataset better explained the volume of Eucalyptus spp. growing from coppices (adjusted R2 (R2Adj) = 0.54, RMSEP = 44.08 m3/ha) than of those that were planted (R2Adj = 0.43, RMSEP = 53.29 m3/ha). Similar results were observed when SPOT 5 spectral bands were applied as an independent dataset, whereas improved volume estimates were produced using the combined dataset. In this case, planted Eucalyptus spp. were better predicted (R2 = 0.77, R2Adj = 0.59, RMSEP = 36.02 m3/ha) than those growing from coppices (R2 = 0.76, R2Adj = 0.46, RMSEP = 40.63 m3/ha). Overall, the findings of this study demonstrated the relevance of multi-source data in ecosystem modelling.
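The three-tier PLSR set-up can be mimicked with synthetic data as below: the predictors are either the ancillary variables, the spectral bands, or both combined, and stand volume is regressed on each set. The data, coefficients, and number of components are illustrative assumptions, not the SPOT 5 study data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 200
ancillary = rng.normal(size=(n, 2))            # e.g. stand age, a rainfall metric (toy)
bands = rng.normal(size=(n, 4))                # e.g. SPOT 5 spectral bands (toy)
volume = 50 + 10 * ancillary[:, 0] + 5 * bands[:, 1] + rng.normal(scale=5, size=n)

for name, X in [("ancillary", ancillary), ("bands", bands),
                ("combined", np.hstack([ancillary, bands]))]:
    pls = PLSRegression(n_components=2).fit(X, volume)
    rmse = np.sqrt(mean_squared_error(volume, pls.predict(X).ravel()))
    print(f"{name}: RMSE = {rmse:.2f} m3/ha")
```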
Test readiness assessment summary for Integrated Dynamic Transit Operations (IDTO).
DOT National Transportation Integrated Search
2012-10-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
NASA Astrophysics Data System (ADS)
Camporese, M.; Botto, A.
2017-12-01
Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows for the direct integration of multisource observation data into modeling predictions and for uncertainty reduction. For this reason, data assimilation has recently been the focus of much attention also for integrated surface-subsurface hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously in an attempt to tackle environmental problems in a holistic approach. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). Although the EnKF has been specifically developed to deal with nonlinear models, integrated hydrological models based on the Richards equation still represent a challenge, due to strong nonlinearities that may significantly affect the filter performance. Thus, more studies are needed to investigate the capabilities of the EnKF to correct the system state and identify parameters in cases where the unsaturated zone dynamics are dominant. Here, the model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope, equipped with tensiometers, water content reflectometer probes, and tipping bucket flow gages to monitor the hillslope response to a series of artificial rainfall events. We assimilate pressure head, soil moisture, and subsurface outflow with the EnKF in a number of assimilation scenarios and discuss the challenges, issues, and tradeoffs arising from the assimilation of multisource data in a real-world test case, with particular focus on the capability of data assimilation to update the subsurface parameters.
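For reference, the generic stochastic EnKF analysis step that underlies this kind of multisource assimilation is sketched below (this is the textbook update, not the CATHY-specific implementation; the state, observation operator, and error level are toy assumptions).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_sd, rng):
    """ensemble: (n_state, n_members); obs: (n_obs,); obs_operator H: (n_obs, n_state)."""
    n_obs, n_members = len(obs), ensemble.shape[1]
    Hx = obs_operator @ ensemble                              # predicted observations
    X_a = ensemble - ensemble.mean(axis=1, keepdims=True)     # state anomalies
    Y_a = Hx - Hx.mean(axis=1, keepdims=True)                 # observation-space anomalies
    R = np.eye(n_obs) * obs_err_sd ** 2
    # Kalman gain built from ensemble covariances: K = X_a Y_a^T [Y_a Y_a^T + (m-1)R]^-1
    K = (X_a @ Y_a.T) @ np.linalg.inv(Y_a @ Y_a.T + (n_members - 1) * R)
    perturbed = obs[:, None] + rng.normal(scale=obs_err_sd, size=(n_obs, n_members))
    return ensemble + K @ (perturbed - Hx)

rng = np.random.default_rng(4)
ens = rng.normal(loc=0.30, scale=0.05, size=(3, 50))   # e.g. soil moisture in 3 layers
H = np.array([[1.0, 0.0, 0.0]])                        # observe the top layer only
updated = enkf_update(ens, np.array([0.35]), H, obs_err_sd=0.02, rng=rng)
print(updated.mean(axis=1))                            # ensemble mean pulled toward the observation
```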
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource change detection fusion process, which allows a single CD result to be generated from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient at identifying the change detection index with the highest contribution. PMID:22737023
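The change/no_change thresholding step can be illustrated as below, with Otsu's method standing in for the several thresholding algorithms the paper compares; the two "images", the simulated change patch, and the simple difference index are all synthetic assumptions (scikit-image is assumed to be available).

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(5)
img_t1 = rng.normal(loc=0.4, scale=0.05, size=(200, 200))     # toy reflectance, date 1
img_t2 = img_t1.copy()
img_t2[50:80, 50:80] += 0.3                                   # simulated landcover change
diff_index = np.abs(img_t2 - img_t1)                          # a simple change index

threshold = threshold_otsu(diff_index)                        # automatic change/no_change split
change_mask = diff_index > threshold
print(f"threshold={threshold:.3f}, changed pixels={int(change_mask.sum())}")
```

In a multisource fusion process, several such binary or probabilistic change layers (one per index) would then be combined into a single CD result.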
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource change detection fusion process, which allows a single CD result to be generated from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient at identifying the change detection index with the highest contribution.
Estimating average alcohol consumption in the population using multiple sources: the case of Spain.
Sordo, Luis; Barrio, Gregorio; Bravo, María J; Villalbí, Joan R; Espelt, Albert; Neira, Montserrat; Regidor, Enrique
2016-01-01
National estimates on per capita alcohol consumption are provided regularly by various sources and may have validity problems, so corrections are needed for monitoring and assessment purposes. Our objectives were to compare different alcohol availability estimates for Spain, to build the best estimate (actual consumption), characterize its time trend during 2001-2011, and quantify the extent to which other estimates (coverage) approximated actual consumption. Estimates were: alcohol availability from the Spanish Tax Agency (Tax Agency availability), World Health Organization (WHO availability) and other international agencies, self-reported purchases from the Spanish Food Consumption Panel, and self-reported consumption from population surveys. Analyses included calculating: between-agency discrepancy in availability, multisource availability (correcting Tax Agency availability by underestimation of wine and cider), actual consumption (adjusting multisource availability by unrecorded alcohol consumption/purchases and alcohol losses), and coverage of selected estimates. Sensitivity analyses were undertaken. Time trends were characterized by joinpoint regression. Between-agency discrepancy in alcohol availability remained high in 2011, mainly because of wine and spirits, although some decrease was observed during the study period. The actual consumption was 9.5 l of pure alcohol/person-year in 2011, decreasing 2.3 % annually, mainly due to wine and spirits. 2011 coverage of WHO availability, Tax Agency availability, self-reported purchases, and self-reported consumption was 99.5, 99.5, 66.3, and 28.0 %, respectively, generally with downward trends (last three estimates, especially self-reported consumption). The multisource availability overestimated actual consumption by 12.3 %, mainly due to tourism imbalance. Spanish estimates of per capita alcohol consumption show considerable weaknesses. Using uncorrected estimates, especially self-reported consumption, for monitoring or other purposes is misleading. To obtain conservative estimates of alcohol-attributable disease burden or heavy drinking prevalence, self-reported consumption should be shifted upwards by more than 85 % (91 % in 2011) of Tax Agency or WHO availability figures. The weaknesses identified can probably also be found worldwide, thus much empirical work remains to be done to improve estimates of per capita alcohol consumption.
Challenges with secondary use of multi-source water-quality data in the United States
Sprague, Lori A.; Oelsner, Gretchen P.; Argue, Denise M.
2017-01-01
Combining water-quality data from multiple sources can help counterbalance diminishing resources for stream monitoring in the United States and lead to important regional and national insights that would not otherwise be possible. Individual monitoring organizations understand their own data very well, but issues can arise when their data are combined with data from other organizations that have used different methods for reporting the same common metadata elements. Such use of multi-source data is termed “secondary use”—the use of data beyond the original intent determined by the organization that collected the data. In this study, we surveyed more than 25 million nutrient records collected by 488 organizations in the United States since 1899 to identify major inconsistencies in metadata elements that limit the secondary use of multi-source data. Nearly 14.5 million of these records had missing or ambiguous information for one or more key metadata elements, including (in decreasing order of records affected) sample fraction, chemical form, parameter name, units of measurement, precise numerical value, and remark codes. As a result, metadata harmonization to make secondary use of these multi-source data will be time consuming, expensive, and inexact. Different data users may make different assumptions about the same ambiguous data, potentially resulting in different conclusions about important environmental issues. The value of these ambiguous data is estimated at US$12 billion, a substantial collective investment by water-resource organizations in the United States. By comparison, the value of unambiguous data is estimated at US$8.2 billion. The ambiguous data could be preserved for uses beyond the original intent by developing and implementing standardized metadata practices for future and legacy water-quality data throughout the United States.
[Estimation of desert vegetation coverage based on multi-source remote sensing data].
Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui
2012-12-01
Taking the lower reaches of the Tarim River in Xinjiang, Northwest China, as the study area and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that with increasing spatial resolution of the remote sensing data, the precision of the estimation models increased. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on remote sensing data of different spatial resolutions, and realized the quantitative conversion of parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which would provide direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct illustrative simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
Mashup Scheme Design of Map Tiles Using Lightweight Open Source WebGIS Platform
NASA Astrophysics Data System (ADS)
Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.
2018-04-01
To address the difficulty of integrating multi-source image data with existing commercial Geographic Information System platforms, this research proposes the loading of multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources such as Google Maps, Map World, and Bing Maps. Two types of tile data loading schemes have been designed for the mashup of tiles: the single data source loading scheme and the multi-data source loading scheme. The multi-source digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single data source loading scheme and the multi-data source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small and medium-scale GIS programs and has certain potential for practical application. The problem of deformation during the transition between different spatial references is an important topic for further research.
DOT National Transportation Integrated Search
2012-03-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Fusion of multi-source remote sensing data for agriculture monitoring tasks
NASA Astrophysics Data System (ADS)
Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.
2016-12-01
Remote sensing data is an essential source of information for monitoring and quantifying crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days - GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse resolution empirical yield models can be downscaled to provide yield estimates at regional and field scales. In particular, we present a case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA2 derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of yearly derived winter crop maps is their use for stratification purposes within area frame sampling for crop area estimation. In particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and the strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In case of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
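A hedged sketch of the unsupervised two-component idea follows (not the exact GMM-GDD procedure): pixels are described by a simple NDVI-based feature and a Gaussian mixture separates "winter crop" from "other" without ground-truth labels. The NDVI distributions and class sizes are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Hypothetical early-spring NDVI: winter crops are already green, other land is not
ndvi_other = rng.normal(loc=0.25, scale=0.05, size=3000)
ndvi_winter = rng.normal(loc=0.55, scale=0.07, size=1000)
features = np.concatenate([ndvi_other, ndvi_winter]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)
winter_component = int(np.argmax(gmm.means_.ravel()))   # greener component = winter crop
winter_mask = labels == winter_component
print(f"mapped winter-crop fraction: {winter_mask.mean():.2f}")   # expect roughly 0.25
```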
School adjustment of children in residential care: a multi-source analysis.
Martín, Eduardo; Muñoz de Bustillo, María del Carmen
2009-11-01
School adjustment is one of the greatest challenges in residential child care programs. This study has two aims: to analyze school adjustment compared to a normative population, and to carry out a multi-source analysis (child, classmates, and teacher) of this adjustment. A total of 50 classrooms containing 60 children from residential care units were studied. The "Método de asignación de atributos perceptivos" (Allocation of Perceptive Attributes; Díaz-Aguado, 2006), the "Test Autoevaluativo Multifactorial de Adaptación Infantil" (TAMAI [Multifactor Self-assessment Test of Child Adjustment]; Hernández, 1996) and the "Protocolo de valoración para el profesorado" (Evaluation Protocol for Teachers; Fernández del Valle, 1998) were applied. The main results indicate that, compared with their classmates, children in residential care are perceived as more controversial and less integrated at school, although no differences were observed in problems of isolation. The multi-source analysis shows that there is agreement among the different sources when the externalized and visible aspects are evaluated. These results are discussed in connection with the practices being developed in residential child care programs.
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
The objective was to assist with evaluating and measuring wetland hydroperiod at the Plum Brook Station using multi-source remote sensing data, as part of a larger effort to project climate change-related impacts on the station's wetland ecosystems. MTRI expanded on multi-source remote sensing capabilities to help estimate and measure the hydroperiod and relative soil moisture of wetlands at NASA's Plum Brook Station. These capabilities are important because a changing regional climate poses several potential risks for wetland ecosystem function. The year two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: (1) show the relative length of hydroperiod using available remote sensing datasets, (2) produce a date-linked table of wetland extent over time for all feasible non-forested wetlands, (3) utilize LIDAR data to measure the topographic height above sea level of all wetlands, the wetland-to-catchment area ratio, wetland slope, and other useful variables, (4) demonstrate how analyzed results from multiple remote sensing data sources can help with wetland vulnerability assessment, and (5) produce an MTRI-style report summarizing year 2 results.
DOT National Transportation Integrated Search
2012-08-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
DOT National Transportation Integrated Search
2011-11-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Hill, Jacqueline J; Asprey, Anthea; Richards, Suzanne H; Campbell, John L
2012-01-01
Background: UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. Aim: To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. Design and setting: A qualitative study in UK general practice. Method: In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured telephone interview. A thematic analysis was performed. Results: Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, ‘self’ selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. Conclusion: Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised. PMID:22546590
Hill, Jacqueline J; Asprey, Anthea; Richards, Suzanne H; Campbell, John L
2012-05-01
UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. A qualitative study in UK general practice. In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured, telephone interview. A thematic analysis was performed. Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, 'self' selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised.
Revisions to the JDL data fusion model
NASA Astrophysics Data System (ADS)
Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.
1999-03-01
The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems. Data fusion involves combining information - in the broadest sense - to estimate or predict the state of some aspect of the universe. These states may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design and development.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, from which the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
The design and implementation of hydrographical information management system (HIMS)
NASA Astrophysics Data System (ADS)
Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming
2005-10-01
With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management modes and techniques are unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. Several advanced techniques, including GIS, RS, spatial database management and VR, are introduced to solve these problems. Design principles and key techniques of the HIMS, including a mixed mode based on B/S, C/S and stand-alone computer modes, multi-source and multi-scale data organization and management, multi-source data integration and diverse visualization of digital charts, and efficient security control strategies, are illustrated in detail. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received good evaluations.
Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi
2009-01-01
In many practical situations thematic classes cannot be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, soil types, etc., which are discrete attributes, whereas remote sensing image features are continuous attributes. Finding a suitable statistical model and estimating its parameters is a challenging task in multisource (e.g., discrete and continuous attribute) data classification. In this paper we present a semi-supervised learning method that assumes the samples were generated by a mixture model, where each component can be either a continuous or a discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.
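A minimal sketch of a mixture over mixed attribute types is given below, assuming a factorized (naive) class-conditional form with one Gaussian and one categorical attribute and a single semi-supervised EM pass; the published method is more general.

```python
# Hedged sketch of a mixture-model classifier over mixed attributes: each class-conditional
# density factors into a Gaussian part (continuous measurement) and a categorical part
# (discrete ancillary attribute). The factorized form and single EM pass are simplifications.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Labeled data: continuous spectral value x, discrete soil-type code d (0..2), label y
x_lab = np.r_[rng.normal(0.3, 0.05, 50), rng.normal(0.6, 0.05, 50)]
d_lab = np.r_[rng.integers(0, 2, 50), rng.integers(1, 3, 50)]
y_lab = np.r_[np.zeros(50, int), np.ones(50, int)]

# Unlabeled data
x_unl = rng.normal(0.45, 0.15, 200)
d_unl = rng.integers(0, 3, 200)

def fit(x, d, w):
    """Weighted estimates of Gaussian and categorical parameters for one class."""
    mu = np.average(x, weights=w)
    sd = np.sqrt(np.average((x - mu) ** 2, weights=w)) + 1e-6
    pmf = np.array([np.average(d == k, weights=w) for k in range(3)]) + 1e-6
    return mu, sd, pmf / pmf.sum()

# M-step on labeled data, then one E-step / M-step pass folding in the unlabeled data
params = [fit(x_lab, d_lab, (y_lab == c).astype(float)) for c in (0, 1)]
lik = np.stack([norm.pdf(x_unl, mu, sd) * pmf[d_unl] for mu, sd, pmf in params], axis=1)
resp = lik / lik.sum(axis=1, keepdims=True)           # soft class memberships (E-step)
params = [fit(np.r_[x_lab, x_unl], np.r_[d_lab, d_unl],
              np.r_[(y_lab == c).astype(float), resp[:, c]]) for c in (0, 1)]
print("Class means after one semi-supervised pass:", [round(p[0], 3) for p in params])
```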
Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.
2011-01-01
Pattern classification of brain imaging data can enable the automatic detection of differences in cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information related to the regions of the brain that are most relevant to detect these differences by means of feature selection procedures, which are also well-suited to deal with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95%, estimated with a leave-two-out cross-validation procedure, was achieved by multi-source data classification. In addition, it is shown that the classification accuracy rate obtained by using multi-source data surpasses that reached by using single-source data, hence showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948
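The following sketch shows the general shape of recursive feature elimination over region-level features, here simplified to a linear SVM on synthetic data rather than the composite-kernel machine used in the paper.

```python
# Hedged sketch: recursive feature elimination (RFE) over region-level fMRI features for
# two-group classification. Region features and labels are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_regions = 60, 20
X = rng.normal(size=(n_subjects, n_regions))          # one summary feature per brain region
y = rng.integers(0, 2, n_subjects)                    # 0 = control, 1 = patient
X[y == 1, :3] += 0.8                                  # make the first 3 regions informative

selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1).fit(X, y)
print("Regions retained:", np.flatnonzero(selector.support_))
```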
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants at local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment, allowing the spatial variation in source distribution and meteorological conditions to be analyzed in greater quantitative detail. The developed modeling approach has been applied to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with monitoring data, and good agreement is obtained, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
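A sketch of the core building block, superposing ground-level Gaussian-plume contributions from several point sources, is given below; dispersion coefficients and source parameters are illustrative values only.

```python
# Hedged sketch of superposing ground-level Gaussian-plume contributions from several point
# sources, the basic ingredient of a multi-source Gaussian model like the one described above.
import numpy as np

def plume_ground_conc(q, u, x, y, sigma_y=50.0, sigma_z=30.0, h=20.0):
    """Ground-level concentration (g/m^3) at (x, y) downwind of a stack of height h."""
    if x <= 0:                      # receptor upwind of the source contributes nothing
        return 0.0
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = 2 * np.exp(-h**2 / (2 * sigma_z**2))   # ground reflection term at z = 0
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Two hypothetical point sources: (emission rate g/s, wind speed m/s, downwind m, crosswind m)
sources = [(120.0, 3.0, 800.0, 50.0), (60.0, 3.0, 1500.0, -200.0)]
total = sum(plume_ground_conc(*s) for s in sources)
print(f"Combined ground-level concentration: {total:.2e} g/m^3")
```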
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the maximum likelihood (ML) classification method when the Hughes phenomenon is apparent.
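The evidence-combination step can be illustrated with a small implementation of Dempster's rule over toy mass functions; the interval-valued probabilities used in the paper are not reproduced here.

```python
# Hedged sketch of Dempster's rule for combining two bodies of evidence over the same frame
# of discernment, using toy mass assignments.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

crop, forest = frozenset({"crop"}), frozenset({"forest"})
either = crop | forest
m_spectral = {crop: 0.6, forest: 0.3, either: 0.1}    # evidence from spectral data
m_terrain = {crop: 0.4, forest: 0.4, either: 0.2}     # evidence from terrain data
print(dempster_combine(m_spectral, m_terrain))
```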
Estimating error cross-correlations in soil moisture data sets using extended collocation analysis
USDA-ARS's Scientific Manuscript database
Consistent global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multi-source soil moisture retrievals int...
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yunsong; Schuster, Gerard T.
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
Huang, Yunsong; Schuster, Gerard T.
2017-10-26
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When faced with multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for joint regulation of water quantity and water quality of a group of lakes in Wuhan, China is established.
Multisource feedback to graduate nurses: a multimethod study.
McPhee, Samantha; Phillips, Nicole M; Ockerby, Cherene; Hutchinson, Alison M
2017-11-01
(1) To explore graduate nurses' perceptions of the influence of multisource feedback on their performance and (2) to explore perceptions of Clinical Nurse Educators involved in providing feedback regarding feasibility and benefit of the approach. Graduate registered nurses are expected to provide high-quality care for patients in demanding and unpredictable clinical environments. Receiving feedback is essential to their development. Performance appraisals are a common method used to provide feedback and typically involve a single source of feedback. Alternatively, multisource feedback allows the learner to gain insight into performance from a variety of perspectives. This study explores multisource feedback in an Australian setting within the graduate nurse context. Multimethod study. Eleven graduates were given structured performance feedback from four raters: Nurse Unit Manager, Clinical Nurse Educator, preceptor and a self-appraisal. Thirteen graduates received standard single-rater appraisals. Data regarding perceptions of feedback for both groups were obtained using a questionnaire. Semistructured interviews were conducted with nurses who received multisource feedback and the educators. In total, 94% (n = 15) of survey respondents perceived feedback was important during the graduate year. Four themes emerged from interviews: informal feedback, appropriateness of raters, elements of delivery and creating an appraisal process that is 'more real'. Multisource feedback was perceived as more beneficial compared to single-rater feedback. Educators saw value in multisource feedback; however, perceived barriers were engaging raters and collating feedback. Some evidence exists to indicate that feedback from multiple sources is valued by graduates. Further research in a larger sample and with more experienced nurses is required. Evidence resulting from this study indicates that multisource feedback is valued by both graduates and educators and informs graduates' development and transition into the role. Thus, a multisource approach to feedback for graduate nurses should be considered. © 2016 John Wiley & Sons Ltd.
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion, and further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data, integrating temporal, spectral, angular, and topographic features, achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
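A minimal sketch of the fusion-then-classify idea, stacking features that stand in for spectral, temporal and topographic sources and classifying with a random forest on synthetic data, is shown below; the actual spatio-temporal fusion used in the paper is more involved than a simple stack.

```python
# Hedged sketch: stacking features from different sources into one matrix and classifying
# land cover with a random forest. Feature names and arrays are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pixels = 1000
spectral = rng.normal(size=(n_pixels, 6))     # e.g. Landsat-8 OLI bands
temporal = rng.normal(size=(n_pixels, 12))    # e.g. monthly NDVI composites
terrain = rng.normal(size=(n_pixels, 2))      # e.g. DEM elevation and slope
labels = rng.integers(0, 4, n_pixels)         # four hypothetical land-cover classes

X = np.hstack([spectral, temporal, terrain])
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```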
NASA Astrophysics Data System (ADS)
Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.
2016-11-01
The geological disposal of high-level radioactive waste (hereinafter "geological disposal") is a long-term, complex, and systematic scientific project. The data and information resources generated in its research and development (hereinafter "R&D") process provide significant support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the data related to research and engineering in the siting of geological disposal repositories are complicated (multi-source, multi-dimensional and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the data model design of a geo-information database for the disposal repository faces serious challenges. In this paper, the data resources of the pre-selected areas of the repository have been comprehensively reviewed and systematically analyzed. Based on a deep understanding of the application requirements, the research work provides a solution for the key technical problems, including a reasonable classification system for multi-source data entities, complex logical relations, and effective physical storage structures. The new solution moves beyond the data classification and conventional spatial data organization models applied in the traditional industry, and realizes data organization and integration with data entities and spatial relationships as the basic units, which are independent, complete and application-significant features of HLW geological disposal. Reasonable, feasible and flexible conceptual, logical and physical data models have been established to ensure the effective integration, and to facilitate the application development, of multi-source data in pre-selected areas for geological disposal.
ERIC Educational Resources Information Center
Liao, Hui; Chuang, Aichia
2007-01-01
This longitudinal field study integrates the theories of transformational leadership (TFL) and relationship marketing to examine how TFL influences employee service performance and customer relationship outcomes by transforming both (at the micro level) the service employees' attitudes and (at the macro level) the work unit's service climate.…
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets based on the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images acquired over two experimental fields, located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and for the corresponding consistency analysis and evaluation.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets based on the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images acquired over two experimental fields, located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and for the corresponding consistency analysis and evaluation. PMID:25405760
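The correction idea can be illustrated with a simple moment-matching sketch, assuming approximately Gaussian reflectance distributions and using the fine-scale data as the baseline; the published procedure is more detailed.

```python
# Hedged sketch: rescale coarse-scale reflectance so its mean and standard deviation match the
# fine-scale baseline, a moment-matching simplification of the correction described above.
import numpy as np

rng = np.random.default_rng(0)
fine = rng.normal(0.25, 0.04, 10_000)          # baseline surface reflectance, small scale
coarse = rng.normal(0.30, 0.06, 10_000)        # same area observed by a coarser sensor

corrected = (coarse - coarse.mean()) / coarse.std() * fine.std() + fine.mean()
print(f"before: mean={coarse.mean():.3f}, std={coarse.std():.3f}; "
      f"after: mean={corrected.mean():.3f}, std={corrected.std():.3f}")
```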
NASA Astrophysics Data System (ADS)
Yongzhi, WANG; hui, WANG; Lixia, LIAO; Dongsen, LI
2017-02-01
In order to analyse the geological characteristics of salt rock and the stability of salt caverns, rough three-dimensional (3D) models of the salt rock stratum and 3D models of salt caverns in the study areas are built using 3D GIS spatial modeling techniques. During implementation, multi-source data such as basic geographic data, DEM, geological plane maps, geological section maps, engineering geological data, and sonar data are used. In this study, 3D spatial analysis and calculation methods, such as 3D GIS intersection detection, Boolean operations between 3D entities, and 3D grid discretization, are used to build 3D models of the wall rock of salt caverns. Our methods can provide effective calculation models for numerical simulation and analysis of the creep characteristics of wall rock in salt caverns.
ERIC Educational Resources Information Center
Goldring, Ellen B.; Mavrogordato, Madeline; Haynes, Katherine Taylor
2015-01-01
Purpose: A relatively new approach to principal evaluation is the use of multisource feedback, which typically entails a leader's self-evaluation as well as parallel evaluations from subordinates, peers, and/or superiors. However, there is little research on how principals interact with evaluation data from multisource feedback systems. This…
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal), have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
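A sketch of the second approach, per-source base classifiers whose prediction scores are completed and fused, is given below on synthetic data; mean imputation of the missing scores is an illustrative simplification of the score-estimation step.

```python
# Hedged sketch: one base classifier per data source, prediction scores stacked into a score
# matrix, missing scores filled in, and a fusion model trained on the completed scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)
sources = {name: rng.normal(size=(n, d)) + y[:, None] * 0.8
           for name, d in [("MRI", 10), ("PET", 8), ("CSF", 3)]}
observed = {name: rng.random(n) < frac for name, frac in
            [("MRI", 1.0), ("PET", 0.5), ("CSF", 0.5)]}   # PET/CSF missing for ~half the subjects

scores = np.full((n, len(sources)), np.nan)
for j, (name, X) in enumerate(sources.items()):
    mask = observed[name]
    clf = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    scores[mask, j] = clf.decision_function(X[mask])

col_means = np.nanmean(scores, axis=0)                 # estimate the missing prediction scores
scores = np.where(np.isnan(scores), col_means, scores)
fusion = LogisticRegression(max_iter=1000).fit(scores, y)
print(f"Training accuracy of the fused model: {fusion.score(scores, y):.2f}")
```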
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2012-01-01
Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC), have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655
Multi-source energy harvester to power sensing hardware on rotating structures
NASA Astrophysics Data System (ADS)
Schlichting, Alexander; Ouellette, Scott; Carlson, Clinton; Farinholt, Kevin M.; Park, Gyuhae; Farrar, Charles R.
2010-04-01
The U.S. Department of Energy (DOE) proposes to meet 20% of the nation's energy needs through wind power by the year 2030. To accomplish this goal, the industry will need to produce larger (>100m diameter) turbines to increase efficiency and maximize energy production. It will be imperative to instrument the large composite structures with onboard sensing to provide structural health monitoring capabilities to understand the global response and integrity of these systems as they age. A critical component in the deployment of such a system will be a robust power source that can operate for the lifespan of the wind turbine. In this paper we consider the use of discrete, localized power sources that derive energy from the ambient (solar, thermal) or operational (kinetic) environment. This approach will rely on a multi-source configuration that scavenges energy from photovoltaic and piezoelectric transducers. Each harvester is first characterized individually in the laboratory and then they are combined through a multi-source power conditioner that is designed to combine the output of each harvester in series to power a small wireless sensor node that has active-sensing capabilities. The advantages/disadvantages of each approach are discussed, along with the proposed design for a field ready energy harvester that will be deployed on a small-scale 19.8m diameter wind turbine.
Latent Profiles of Parental Self-Efficacy and Children's Multisource-Evaluated Social Competence
ERIC Educational Resources Information Center
Junttila, Niina; Vauras, Marja
2014-01-01
Background: The interrelation between mothers' parental self-efficacy (PSE) and their school-aged children's well-being has been repeatedly proved. The lack of research in this area situates mainly on the absence of fathers, non-existent family-level studies, the paucity of independent evaluators, and the use of global PSE estimates.…
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment: a global probabilistic deviation metric and a source probabilistic outlyingness metric. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of a normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics have been evaluated and demonstrated correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
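The sketch below computes pairwise Jensen-Shannon distances among per-source histograms and summarizes them into rough variability and outlyingness figures; the simplex projection and exact normalization of the published metrics are not reproduced.

```python
# Hedged sketch: pairwise Jensen-Shannon distances among per-source histograms, summarized
# into simple proxies for global variability and per-source outlyingness.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
bins = np.linspace(-4, 6, 41)
samples = {"site_A": rng.normal(0.0, 1.0, 2000),
           "site_B": rng.normal(0.3, 1.0, 2000),
           "site_C": rng.normal(2.0, 1.5, 2000)}        # an outlying source
pdfs = {k: np.histogram(v, bins=bins, density=True)[0] + 1e-12 for k, v in samples.items()}

names = list(pdfs)
dist = np.array([[jensenshannon(pdfs[a], pdfs[b]) for b in names] for a in names])
print("Mean pairwise JS distance (global variability proxy):",
      round(dist[np.triu_indices(3, 1)].mean(), 3))
print("Per-source mean distance to the others (outlyingness proxy):",
      {n: round(dist[i].sum() / 2, 3) for i, n in enumerate(names)})
```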
Object-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, this paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, and uses a hyperspectral image and Lidar data acquired by flight over Dafeng City, Jiangsu, as the main data sources. First, the hyperspectral image is used to obtain feature knowledge of the remote sensing image and related special indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) to obtain elevation information; finally, the image feature knowledge, special indices and elevation information are used to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, performing especially well for building classification. The method not only exploits the advantages of multi-source spatial data, such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources, and the data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in the experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two different approaches have unique advantages and disadvantages in this classification application.
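A simplified reading of the reliability-weighting idea, combining per-source class log-likelihoods with weights reflecting source quality, is sketched below; the exact weighting scheme in the paper may differ.

```python
# Hedged sketch: reliability-weighted combination of per-source class log-likelihoods for one
# pixel, with the less reliable source given a smaller exponent (weight).
import numpy as np

rng = np.random.default_rng(0)
n_classes = 3
# Log-likelihoods of one pixel under each class, from two sources (e.g. MSS and elevation)
loglik_spectral = np.log(rng.dirichlet(np.ones(n_classes)))
loglik_terrain = np.log(rng.dirichlet(np.ones(n_classes)))
weights = {"spectral": 1.0, "terrain": 0.6}      # terrain judged less reliable

combined = weights["spectral"] * loglik_spectral + weights["terrain"] * loglik_terrain
print("Selected class:", int(np.argmax(combined)))
```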
A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags
NASA Astrophysics Data System (ADS)
Meng, S.; Xie, X.
2015-12-01
In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties in the input data, model parameters, model structure and output observations. Data assimilation is a useful methodology for reducing uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition improves forecasting performance, and accounting for the time delay of runoff routing is another important factor. Moreover, observations of hydrological variables (including ground observations and satellite observations) are becoming readily available, so the reliability of short-term flood forecasting can be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates up-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF), and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS). The smoothing technique is adopted to account for the runoff routing lag. Assimilating soil moisture and discharge observations in this way is expected to improve flood forecasting. To evaluate the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, the new data assimilation framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
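The first assimilation step can be illustrated with a single ensemble Kalman filter analysis update on a scalar soil moisture state, as sketched below; the EnKS smoothing over the routing-lag window is omitted.

```python
# Hedged sketch of a single EnKF analysis step updating a soil moisture ensemble with one
# observation (scalar state, identity observation operator).
import numpy as np

rng = np.random.default_rng(0)
n_ens = 50
state = rng.normal(0.30, 0.05, n_ens)      # prior ensemble of up-layer soil moisture
obs, obs_err = 0.36, 0.02                  # observed soil moisture and its error std

perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
p_forecast = state.var(ddof=1)
gain = p_forecast / (p_forecast + obs_err**2)          # Kalman gain
analysis = state + gain * (perturbed_obs - state)
print(f"prior mean {state.mean():.3f} -> analysis mean {analysis.mean():.3f}")
```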
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model (GR4J) by coherently assimilating the uncertainties from the...
Senay, Gabriel B.; Velpuri, Naga Manohar; Alemu, Henok; Pervez, Shahriar Md; Asante, Kwabena O; Karuki, Gatarwa; Taa, Asefa; Angerer, Jay
2013-01-01
Timely information on the availability of water and forage is important for the sustainable development of pastoral regions. The lack of such information increases the dependence of pastoral communities on perennial sources, which often leads to competition and conflicts. The provision of timely information is a challenging task, especially due to the scarcity or non-existence of conventional station-based hydrometeorological networks in the remote pastoral regions. A multi-source water balance modelling approach driven by satellite data was used to operationally monitor daily water level fluctuations across the pastoral regions of northern Kenya and southern Ethiopia. Advanced Spaceborne Thermal Emission and Reflection Radiometer data were used for mapping and estimating the surface area of the waterholes. Satellite-based rainfall, modelled run-off and evapotranspiration data were used to model daily water level fluctuations. Mapping of waterholes was achieved with 97% accuracy. Validation of modelled water levels with field-installed gauge data demonstrated the ability of the model to capture the seasonal patterns and variations. Validation results indicate that the model explained 60% of the observed variability in water levels, with an average root-mean-squared error of 22%. Up-to-date information on rainfall, evaporation, scaled water depth and condition of the waterholes is made available daily in near-real time via the Internet (http://watermon.tamu.edu). Such information can be used by non-governmental organizations, governmental organizations and other stakeholders for early warning and decision making. This study demonstrated an integrated approach for establishing an operational waterhole monitoring system using multi-source satellite data and hydrologic modelling.
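A toy daily water-balance update in the spirit of the waterhole model, with uncalibrated illustrative coefficients, is sketched below.

```python
# Hedged sketch of a daily water-balance update for a waterhole: depth rises with rainfall and
# routed run-off and falls with evapotranspiration. Coefficients and inputs are illustrative.
import numpy as np

rng = np.random.default_rng(0)
days = 120
rain = rng.gamma(0.3, 8.0, days)           # mm/day, satellite rainfall estimate
et = np.full(days, 5.0)                    # mm/day, open-water evapotranspiration
runoff_coef, catchment_ratio = 0.15, 25.0  # run-off fraction; catchment-to-pond area ratio

depth = np.zeros(days)
for t in range(1, days):
    inflow = rain[t] + runoff_coef * rain[t] * catchment_ratio
    depth[t] = max(depth[t - 1] + inflow - et[t], 0.0)
print(f"Maximum modelled water depth: {depth.max():.0f} mm")
```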
New geomorphic data on the active Taiwan orogen: A multisource approach
NASA Technical Reports Server (NTRS)
Deffontaines, B.; Lee, J.-C.; Angelier, J.; Carvalho, J.; Rudant, J.-P.
1994-01-01
A multisource and multiscale approach to Taiwan morphotectonics combines different complementary geomorphic analyses based on a new digital elevation model (DEM), side-looking airborne radar (SLAR), and satellite (SPOT) imagery, aerial photographs, and control from independent field data. This analysis enables us not only to present an integrated geomorphic description of the Taiwan orogen but also to highlight some new geodynamic aspects. Well-known, major geological structures such as the Longitudinal Valley, Lishan, Pingtung, and the Foothills fault zones are of course clearly recognized, but numerous, previously unrecognized structures appear distributed within different regions of Taiwan. For instance, transfer fault zones within the Western Foothills and the Central Range are identified based on analyses of lineaments and general morphology. In many cases, the existence of geomorphic features identified in general images is supported by the results of geological field analyses carried out independently. In turn, the field analyses of structures and mechanisms at some sites provide a key for interpreting similar geomorphic features in other areas. Examples are the conjugate pattern of strike-slip faults within the Central Range and the oblique fold-and-thrust pattern of the Coastal Range. Furthermore, neotectonic and morphological analyses (drainage and erosional surfaces) have been combined in order to obtain a more comprehensive description and interpretation of neotectonic features in Taiwan, such as the Longitudinal Valley Fault. Next, at a more general scale, numerical processing of digital elevation models, resulting in average topography, summit level or base level maps, allows identification of major features related to the dynamics of uplift and erosion and estimates of erosion balance. Finally, a preliminary morphotectonic sketch map of Taiwan, combining information from all the sources listed above, is presented.
Ontology driven integration platform for clinical and translational research
Mirhaji, Parsa; Zhu, Min; Vagnoni, Mattew; Bernstam, Elmer V; Zhang, Jiajie; Smith, Jack W
2009-01-01
Semantic Web technologies offer a promising framework for integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize the Semantic Web technologies not only for integrating, repurposing and classification of multi-source clinical data, but also to construct a distributed environment for information sharing, and collaboration online. Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. Components of the semantic solution and its overall architecture are described. PMID:19208190
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
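The coarse pre-registration stage can be sketched with OpenCV's SIFT matching and a RANSAC affine fit, as below (hypothetical file names; the TIN-based piecewise-linear refinement is not shown).

```python
# Hedged sketch of a coarse pre-registration stage: SIFT keypoint matching between an input
# and a reference image followed by a robust affine fit. Requires OpenCV >= 4.4 with SIFT.
import cv2
import numpy as np

ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
inp = cv2.imread("input.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_inp, des_inp = sift.detectAndCompute(inp, None)

# Ratio-test filtering of nearest-neighbour matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des_inp, des_ref, k=2)
           if m.distance < 0.75 * n.distance]

src = np.float32([kp_inp[m.queryIdx].pt for m in matches])
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
affine, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
coarse = cv2.warpAffine(inp, affine, (ref.shape[1], ref.shape[0]))
print(f"{int(inliers.sum())} inlier tie points used for the coarse affine model")
```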
Imputation for multisource data with comparison and assessment techniques
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
2017-12-27
Missing data are a prevalent issue in analyses involving data collection. The problem of missing data is exacerbated in multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion. In this scenario, missing data are more likely to occur and can lead to discarding a large amount of the data collected; however, the information from observed sensors can be leveraged to estimate the values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as the common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of the imputation methods is compared with the mean absolute deviation; however, rather than using this metric solely to rank the methods, we also propose an approach to identify significant differences. Imputation techniques are also assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance on imputed datasets is compared with that on a marginalized dataset through weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
Imputation for multisource data with comparison and assessment techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
Missing data are a prevalent issue in analyses involving data collection. The problem of missing data is exacerbated in multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion. In this scenario, missing data are more likely to occur and can lead to discarding a large amount of the data collected; however, the information from observed sensors can be leveraged to estimate the values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as the common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of the imputation methods is compared with the mean absolute deviation; however, rather than using this metric solely to rank the methods, we also propose an approach to identify significant differences. Imputation techniques are also assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance on imputed datasets is compared with that on a marginalized dataset through weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
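The ridge-regression variant can be sketched as predicting a missing sensor from the sensors observed at the same time, as below on synthetic correlated sensor data; the dynamic-linear-model variant is not shown.

```python
# Hedged sketch: ridge-regression imputation of one sensor from the others, fit on rows where
# all sensors are present. Sensor data are synthetic and correlated by construction.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=n)
data = np.column_stack([base + rng.normal(0, 0.2, n) for _ in range(4)])  # 4 correlated sensors

missing = rng.random(n) < 0.1          # sensor 0 drops out 10% of the time
complete = ~missing
model = Ridge(alpha=1.0).fit(data[complete][:, 1:], data[complete][:, 0])
imputed = data[:, 0].copy()
imputed[missing] = model.predict(data[missing][:, 1:])

mad = np.mean(np.abs(imputed[missing] - data[missing, 0]))
print(f"Mean absolute deviation of imputed values: {mad:.3f}")
```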
Land Cover Monitoring for Water Resources Management in Angola
NASA Astrophysics Data System (ADS)
Miguel, Irina; Navarro, Ana; Rolim, Joao; Catalao, Joao; Silva, Joel; Painho, Marco; Vekerdy, Zoltan
2016-08-01
The aim of this paper is to assess the impact of improved temporal resolution and multi-source satellite data (SAR and optical) on land cover mapping and monitoring for efficient water resources management. For that purpose, we developed an integrated approach based on image classification and on NDVI and SAR backscattering (VV and VH) time series for land cover mapping and the computation of crop irrigation requirements. We analysed 28 SPOT-5 Take-5 images with a high temporal revisit (5 days), 9 Sentinel-1 dual-polarization GRD images and in-situ data acquired during the crop growing season. Results show that the combination of images from different sources provides the best information for mapping agricultural areas. Increasing the temporal resolution of the images improves the estimation of crop parameters and, in turn, the calculation of crop irrigation requirements. However, this aspect was not fully exploited due to the lack of EO data for the complete growing season.
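One common way to turn a vegetation-index time series into a crop water requirement is a FAO-56-style crop coefficient approximation, sketched below with hypothetical numbers; this is an assumption, not necessarily the study's exact procedure.

```python
# Hedged sketch: estimate a crop coefficient Kc from NDVI with an illustrative linear relation,
# multiply by reference evapotranspiration, and subtract effective rainfall.
import numpy as np

ndvi = np.array([0.2, 0.4, 0.6, 0.75, 0.5])     # hypothetical field-average NDVI per period
et0 = np.array([4.5, 5.0, 5.8, 6.2, 5.5])       # reference evapotranspiration, mm/day
rain_eff = np.array([1.0, 0.5, 0.0, 0.2, 0.8])  # effective rainfall, mm/day

kc = np.clip(1.25 * ndvi + 0.1, 0.2, 1.2)       # illustrative NDVI-to-Kc relation
etc = kc * et0                                  # crop evapotranspiration, mm/day
irrigation_need = np.maximum(etc - rain_eff, 0.0)
print("Irrigation requirement per period (mm/day):", np.round(irrigation_need, 1))
```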
2013-12-14
[Fragment of a publication list: work by Hachem, Kharouf, Najim and Silverstein on central limit theorems (CLTs) for information-theoretic statistics of sample covariance matrices, with applications to array signal processing and multi-source power estimation.]
Binaural segregation in multisource reverberant environments.
Roman, Nicoleta; Srinivasan, Soundararajan; Wang, DeLiang
2006-12-01
In a natural environment, speech signals are degraded by both reverberation and concurrent noise sources. While human listening is robust under these conditions using only two ears, current two-microphone algorithms perform poorly. The psychological process of figure-ground segregation suggests that the target signal is perceived as a foreground while the remaining stimuli are perceived as a background. Accordingly, the goal is to estimate an ideal time-frequency (T-F) binary mask, which selects the target if it is stronger than the interference in a local T-F unit. In this paper, a binaural segregation system that extracts the reverberant target signal from multisource reverberant mixtures by utilizing only the location information of target source is proposed. The proposed system combines target cancellation through adaptive filtering and a binary decision rule to estimate the ideal T-F binary mask. The main observation in this work is that the target attenuation in a T-F unit resulting from adaptive filtering is correlated with the relative strength of target to mixture. A comprehensive evaluation shows that the proposed system results in large SNR gains. In addition, comparisons using SNR as well as automatic speech recognition measures show that this system outperforms standard two-microphone beamforming approaches and a recent binaural processor.
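The ideal time-frequency binary mask that such systems aim to estimate can be written down directly when the target and interference are known separately, as in the sketch below with synthetic signals.

```python
# Hedged sketch of the ideal time-frequency binary mask: a T-F unit is kept when the target is
# locally stronger than the interference. Synthetic signals stand in for reverberant mixtures.
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)                    # stand-in target signal
interference = 0.8 * rng.normal(size=fs)                # stand-in noise source

_, _, T = stft(target, fs=fs, nperseg=512)
_, _, N = stft(interference, fs=fs, nperseg=512)
ibm = (np.abs(T) ** 2 > np.abs(N) ** 2).astype(float)   # ideal binary mask
_, _, M = stft(target + interference, fs=fs, nperseg=512)
masked_mixture = M * ibm                                 # retain target-dominant T-F units
print(f"Fraction of T-F units assigned to the target: {ibm.mean():.2f}")
```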
Dang, Yaoguo; Mao, Wenxin
2018-01-01
For multi-attribute decision-making problems in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute, multi-level decision structure and the causal relationships between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method. PMID:29510521
Sun, Huifang; Dang, Yaoguo; Mao, Wenxin
2018-03-03
For multi-attribute decision-making problems in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute, multi-level decision structure and the causal relationships between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method.
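A small sketch of the kernel and greyness-degree idea for an interval grey number, using one common definition from grey system theory (kernel = interval midpoint under a uniform whitening weight, greyness degree = interval width divided by the width of the background domain). The exact extended-grey-number definitions used in the paper may differ.

```python
def kernel(lower, upper):
    """Kernel (whitened representative value) of the grey number [lower, upper]."""
    return (lower + upper) / 2.0

def greyness_degree(lower, upper, domain_lower, domain_upper):
    """Greyness degree relative to the background domain [domain_lower, domain_upper]."""
    return (upper - lower) / (domain_upper - domain_lower)

# A grey attribute value known only to lie in [6, 8] on a 0-10 rating scale.
print(kernel(6, 8))                      # 7.0
print(greyness_degree(6, 8, 0, 10))      # 0.2
```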
Malling, Bente; Mortensen, Lene; Bonderup, Thomas; Scherpbier, Albert; Ringsted, Charlotte
2009-12-10
Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. On this background, we aimed to study the additional effect of a leadership course following a multi-source feedback procedure compared to multi-source feedback alone, especially regarding the development of leadership skills over time. Study participants were consultants responsible for postgraduate medical education at clinical departments. The study design was pre-post measures with an intervention and a control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using Mann-Whitney's U-test and multivariate analysis of variance. There were no differences in multi-source feedback scores at one-year follow-up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). The study indicates that a leadership course following a MSF procedure compared to MSF alone does not improve leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time and the time frame of one year might have been too short to show improvement in leadership skills of consultants responsible for education. Further studies are needed to investigate if other combinations of initiatives to develop leadership might have more impact in the clinical setting.
Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition
NASA Astrophysics Data System (ADS)
LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.
2013-12-01
Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery-powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage in that multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery-powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, implementing the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, the multi-source approach adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity. Between December of 2012 and July of 2013, we completed multi-source surveys at five sites with depths of exploration ranging from 150 to 450 m. The sites included shallow geothermal sites near Reno Nevada, Pomarance Italy, and Volterra Italy; a mineral exploration site near Timmins Quebec; and a landslide investigation near Vajont Dam in northern Italy. These sites provided a series of challenges in survey design and deployment including some extremely difficult terrain and a broad range of background resistivity and induced polarization values. Despite these challenges, comparison of multi-source results to resistivity and induced polarization data collected with more traditional methods supports the thesis that the multi-source approach is capable of providing substantial improvements in both depth of penetration and resolution over conventional approaches.
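A quick check of the signal/power arithmetic quoted above: received signal is proportional to injected current, while transmitter power grows with the square of the current, so four 400 W transmitters on parallel dipoles give roughly the signal of one 6400 W transmitter.

```python
import math

def current(power_w):
    # current ~ sqrt(power) for a fixed electrode load; the constant of
    # proportionality cancels in the comparison below
    return math.sqrt(power_w)

four_parallel = 4 * current(400.0)        # four units transmitting simultaneously
single_equiv_power = four_parallel ** 2   # power of one unit giving the same signal
print(single_equiv_power)                 # 6400.0
```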
NASA Astrophysics Data System (ADS)
Ma, Y.; Liu, S.
2017-12-01
Accurate, high-quality estimation of surface evapotranspiration (ET) is one of the biggest obstacles for routine applications of remote sensing in eco-hydrological studies and water resource management at the basin scale. Many aspects still require in-depth research, such as the applicability of ET models, the optimization of parameterization schemes at the regional scale, temporal upscaling, the selection and development of spatiotemporal data fusion methods, and ground-based validation over heterogeneous land surfaces. This project is based on the theoretically robust surface energy balance system (SEBS) model, whose mechanism needs further investigation, including its applicability and the influencing factors, such as the local environment and the heterogeneity of the landscape, in order to improve estimation accuracy. Owing to technical and budget limitations, and to frequent cloud contamination and other poor atmospheric conditions in Southwest China, optical remote sensing data are so far often missing. Here, a multi-source remote sensing data fusion method (ESTARFM: Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model) is proposed, blending multi-source remote sensing data acquired by optical and passive microwave sensors on board polar-orbiting satellite platforms. Accurate "all-weather" estimation of daily ET will be carried out for the River Source Region in Southwest China, and the remotely sensed ET results will then be overlaid with footprint-weighted images of EC (eddy correlation) measurements for ground-based validation.
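A drastically simplified illustration of the temporal-blending idea behind STARFM/ESTARFM-type fusion mentioned above: predict a fine-resolution image on a data-poor date by adding the temporal change observed in the coarse-resolution series to a clear fine-resolution base image. The real ESTARFM additionally weights spectrally similar neighbouring pixels and uses two base date pairs; the arrays here are toy values, not project data.

```python
import numpy as np

def blend(fine_t0, coarse_t0, coarse_t1):
    """Predict the fine-resolution image at t1 from one fine/coarse pair at t0."""
    # coarse images are assumed already resampled to the fine grid
    return fine_t0 + (coarse_t1 - coarse_t0)

fine_t0 = np.array([[0.30, 0.32], [0.28, 0.35]])   # e.g. Landsat-scale value at t0
coarse_t0 = np.full((2, 2), 0.31)                   # MODIS-scale value at t0
coarse_t1 = np.full((2, 2), 0.36)                   # MODIS-scale value at t1
print(blend(fine_t0, coarse_t0, coarse_t1))         # predicted fine image at t1
```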
NASA Astrophysics Data System (ADS)
Velpuri, N. M.; Senay, G. B.; Rowland, J.; Budde, M. E.; Verdin, J. P.
2015-12-01
Continental Africa has the largest volume of water stored in wetlands, large lakes, reservoirs and rivers, yet it suffers with problems such as water availability and access. Furthermore, African countries are amongst the most vulnerable to the impact of natural hazards such as droughts and floods. With climate change intensifying the hydrologic cycle and altering the distribution and frequency of rainfall, the problem of water availability and access is bound to increase. The U.S Geological Survey Famine Early Warning Systems Network (FEWS NET), funded by the U.S. Agency for International Development, has initiated a large-scale project to monitor small to medium surface water bodies in Africa. Under this project, multi-source satellite data and hydrologic modeling techniques are integrated to monitor these water bodies in Africa. First, small water bodies are mapped using satellite data such as Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat, and high resolution Google Earth imagery. Stream networks and watersheds for each water body are identified using Shuttle Radar Topography Mission (SRTM) digital elevation data. Finally, a hydrologic modeling approach that uses satellite-derived precipitation estimates and evapotranspiration data calculated from global data assimilation system climate parameters is applied to model water levels. This approach has been implemented to monitor nearly 300 small water bodies located in 10 countries in sub-Saharan Africa. Validation of modeled scaled depths with field-installed gauge data in East Africa demonstrated the ability of the model to capture both the spatial patterns and seasonal variations. Modeled scaled estimates captured up to 60% of the observed gauge variability with an average RMSE of 22%. Current and historic data (since 2001) on relative water level, precipitation, and evapotranspiration for each water body is made available in near real time. The water point monitoring network will be further expanded to cover other pastoral regions of sub-Saharan Africa. This project provides timely information on water availability that supports FEWS NET monitoring activities in Africa. Information on water availability produced in this study would further increase the resilience of local communities to floods and droughts.
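A sketch of the kind of daily water-balance bookkeeping that sits behind modelled water levels like those described above: storage change equals catchment runoff plus direct rainfall minus evapotranspiration, expressed here as a "scaled depth" in percent of capacity. The coefficients and forcing values are purely illustrative assumptions, not FEWS NET model parameters.

```python
def daily_scaled_depth(depth_pct, rain_mm, et_mm, runoff_coeff=0.1,
                       catchment_to_lake_ratio=20.0, capacity_mm=3000.0):
    """Advance the scaled depth (0-100 %) by one day of forcing."""
    inflow = runoff_coeff * rain_mm * catchment_to_lake_ratio  # runoff per unit lake area
    change_mm = inflow + rain_mm - et_mm
    depth_pct += 100.0 * change_mm / capacity_mm
    return min(max(depth_pct, 0.0), 100.0)

depth = 55.0
for rain, et in [(0.0, 6.0), (12.0, 5.5), (30.0, 5.0)]:   # three days of forcing
    depth = daily_scaled_depth(depth, rain, et)
print(f"scaled depth after 3 days: {depth:.1f} %")
```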
NASA Technical Reports Server (NTRS)
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
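A minimal sketch of training a two-layer (one hidden layer) network with a conjugate-gradient optimizer, the kind of variant contrasted with plain backpropagation above. It uses a synthetic two-class "multisource" feature matrix rather than actual remote sensing data, and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                        # 6 stacked multisource features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)      # synthetic class labels
n_in, n_hid = X.shape[1], 8

def unpack(w):
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)
    b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]
    b2 = w[-1]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                          # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # output probability
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

w0 = rng.normal(scale=0.1, size=n_in * n_hid + n_hid + n_hid + 1)
res = minimize(loss, w0, method="CG")                 # conjugate-gradient training
W1, b1, W2, b2 = unpack(res.x)
pred = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```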
Ren, Yin; Deng, Lu-Ying; Zuo, Shu-Di; Song, Xiao-Dong; Liao, Yi-Lan; Xu, Cheng-Dong; Chen, Qi; Hua, Li-Zhong; Li, Zheng-Wei
2016-09-01
Identifying factors that influence the land surface temperature (LST) of urban forests can help improve simulations and predictions of spatial patterns of urban cool islands. This requires a quantitative analytical method that combines spatial statistical analysis with multi-source observational data. The purpose of this study was to reveal how human activities and ecological factors jointly influence LST in clustering regions (hot or cool spots) of urban forests. Using Xiamen City, China, from 1996 to 2006 as a case study, we explored the interactions between human activities and ecological factors, as well as their influences on urban forest LST. Population density was selected as a proxy for human activity. We integrated multi-source data (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) to develop a database on a unified urban scale. The driving mechanism of urban forest LST was revealed through a combination of multi-source spatial data and spatial statistical analysis of clustering regions. The results showed that the main factors contributing to urban forest LST were dominant tree species and elevation. The interactions between human activity and specific ecological factors linearly or nonlinearly increased LST in urban forests. Strong interactions between elevation and dominant species were generally observed and were prevalent in either hot or cold spot areas in different years. In conclusion, quantitative studies based on spatial statistics and GeogDetector models should be conducted in urban areas to reveal interactions between human activities, ecological factors, and LST. Copyright © 2016 Elsevier Ltd. All rights reserved.
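A sketch of the factor-detector q statistic commonly used in GeogDetector-style analyses like the one above: q = 1 - sum_h(N_h * var_h) / (N * var), i.e. the share of LST variance explained by a stratifying factor such as dominant tree species. The values are synthetic, not Xiamen City data.

```python
import numpy as np

def geodetector_q(values, strata):
    """q in [0, 1]; larger means the factor explains more of the variance."""
    values, strata = np.asarray(values, float), np.asarray(strata)
    total_ss = len(values) * values.var()            # N * sigma^2
    within_ss = sum(len(v) * v.var()
                    for s in np.unique(strata)
                    for v in [values[strata == s]])  # sum over strata of N_h * sigma_h^2
    return 1.0 - within_ss / total_ss

lst = np.array([31.2, 30.8, 31.0, 27.5, 27.9, 28.1])     # surface temperature, deg C
species = np.array(["A", "A", "A", "B", "B", "B"])       # dominant-species stratum
print(f"q = {geodetector_q(lst, species):.3f}")           # close to 1: strong control
```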
NASA Astrophysics Data System (ADS)
LIU, Yiping; XU, Qing; ZHANG, Heng; LV, Liang; LU, Wanjie; WANG, Dandi
2016-11-01
The purpose of this paper is to solve the problems of traditional single-purpose systems for interpretation and draughting, such as inconsistent standards, limited functionality, dependence on plug-ins, closed architecture and a low level of integration. On the basis of a comprehensive analysis of target element composition, map representation and the features of similar systems, a 3D integrated interpretation and draughting service platform for multi-source, multi-scale and multi-resolution geospatial objects is established based on HTML5 and WebGL. The platform not only integrates object recognition, access, retrieval, three-dimensional display and test evaluation, but also supports the collection, transfer, storage, refreshing and maintenance of geospatial object data, and shows promising prospects and potential for growth.
2009-01-01
Background Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. On this background we aimed to study the additional effect of a leadership course following a multi-source feedback procedure compared to multi-source feedback alone especially regarding development of leadership skills over time. Methods Study participants were consultants responsible for postgraduate medical education at clinical departments. Study design: pre-post measures with an intervention and control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using Mann-Whitney's U-test and Multivariate analysis of variances. Results There were no differences in multi-source feedback scores at one year follow up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). Conclusion The study indicates that a leadership course following a MSF procedure compared to MSF alone does not improve leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time and the time frame of one year might have been too short to show improvement in leadership skills of consultants responsible for education. Further studies are needed to investigate if other combination of initiatives to develop leadership might have more impact in the clinical setting. PMID:20003311
Husain, Syed S; Kalinin, Alexandr; Truong, Anh; Dinov, Ivo D
Intuitive formulation of informative and computationally-efficient queries on big and complex datasets presents a number of challenges. As data collection is increasingly streamlined and ubiquitous, data exploration, discovery and analytics get considerably harder. Exploratory querying of heterogeneous and multi-source information is both difficult and necessary to advance our knowledge about the world around us. We developed a mechanism to integrate dispersed multi-source data and service the mashed information via human and machine interfaces in a secure, scalable manner. This process facilitates the exploration of subtle associations between variables, population strata, or clusters of data elements, which may be opaque to standard independent inspection of the individual sources. This new platform includes a device-agnostic tool (Dashboard webapp, http://socr.umich.edu/HTML5/Dashboard/) for graphical querying, navigating and exploring the multivariate associations in complex heterogeneous datasets. The paper illustrates this core functionality and service-oriented infrastructure using healthcare data (e.g., US data from the 2010 Census, Demographic and Economic surveys, Bureau of Labor Statistics, and Center for Medicare Services) as well as Parkinson's Disease neuroimaging data. Both the back-end data archive and the front-end dashboard interfaces are continuously expanded to include additional data elements and new ways to customize the human and machine interactions. A client-side data import utility allows for easy and intuitive integration of user-supplied datasets. This completely open-science framework may be used for exploratory analytics, confirmatory analyses, meta-analyses, and education and training purposes in a wide variety of fields.
Multisource inverse-geometry CT. Part I. System concept and development
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Man, Bruno, E-mail: deman@ge.com; Harrison, Dan
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals.
Multisource inverse-geometry CT. Part I. System concept and development
De Man, Bruno; Uribe, Jorge; Baek, Jongduk; Harrison, Dan; Yin, Zhye; Longtin, Randy; Roy, Jaydeep; Waters, Bill; Wilson, Colin; Short, Jonathan; Inzinna, Lou; Reynolds, Joseph; Neculaes, V. Bogdan; Frutschy, Kristopher; Senzig, Bob; Pelc, Norbert
2016-01-01
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals. PMID:27487877
Enhanced gamma-ray emission from the microquasar Cygnus X-3 detected by AGILE
NASA Astrophysics Data System (ADS)
Piano, G.; Pittori, C.; Verrecchia, F.; Tavani, M.; Bulgarelli, A.; Fioretti, V.; Zoli, A.; Munar-Adrover, P.; Lucarelli, F.; Donnarumma, I.; Vercellone, S.; Striani, E.; Cardillo, M.; Gianotti, F.; Trifoglio, M.; Giuliani, A.; Mereghetti, S.; Caraveo, P.; Perotti, F.; Chen, A.; Argan, A.; Costa, E.; Del Monte, E.; Evangelista, Y.; Feroci, M.; Lazzarotto, F.; Lapshov, I.; Pacciani, L.; Soffitta, P.; Sabatini, S.; Vittorini, V.; Pucella, G.; Rapisarda, M.; Di Cocco, G.; Fuschino, F.; Galli, M.; Labanti, C.; Marisaldi, M.; Pellizzoni, A.; Pilia, M.; Trois, A.; Barbiellini, G.; Vallazza, E.; Longo, F.; Morselli, A.; Picozza, P.; Prest, M.; Lipari, P.; Zanello, D.; Cattaneo, P. W.; Rappoldi, A.; Colafrancesco, S.; Parmiggiani, N.; Ferrari, A.; Antonelli, A.; Giommi, P.; Salotti, L.; Valentini, G.; D'Amico, F.
2016-04-01
Integrating from 2016-04-16 00:00 UT to 2016-04-19 00:00 UT, the AGILE-GRID detector is revealing gamma-ray emission above 100 MeV from a source positionally consistent with Cygnus X-3 at Galactic coordinates (l, b) = (79.4, 0.2) +/- 0.6 (stat.) +/- 0.1 (syst.) deg, with flux F( > 100 MeV) = (2.0 +/- 0.8) x 10^-6 photons/cm^2/s, as determined by a multi-source likelihood analysis.
Miranda, Elaine Silva; Pinto, Cláudia Du Bocage Santos; dos Reis, André Luis de Almeida; Emmerick, Isabel Cristina Martins; Campos, Mônica Rodrigues; Luiza, Vera Lucia; Osorio-de-Castro, Claudia Garcia Serpa
2009-10-01
A study to identify availability and prices of medicines, according to type of provider, was conducted in the five regions of Brazil. A list of medicines to treat prevalent diseases was investigated, using the medicines price methodology developed by the World Health Organization and Health Action International, adapted for Brazil. In the public sector, bioequivalent (vis-à-vis reference brand) generics are less available than multisource products. For most medicines (71.4%), the availability of bioequivalent generics was less than 10%. In the private sector, the average number of different bioequivalent generic versions in the outlets was far smaller than the number of versions on the market. There was a positive correlation between the number of generics on the market, or those found at outlets, and the price variation in bioequivalent generic products, in relation to the maximum consumer price. It is estimated that price competition is occurring among bioequivalent generic drugs and between them and multisource products for the same substance, but not with reference brands.
NASA Astrophysics Data System (ADS)
Zhao, Junsan; Chen, Guoping; Yuan, Lei
2017-04-01
New technologies such as 3D laser scanning, InSAR, GNSS, unmanned aerial vehicles and the Internet of Things provide many more data resources for surveying and monitoring, as well as for the development of Early Warning Systems (EWS). This paper presents the design and implementation of a geological disaster monitoring and early warning system (GDMEWS) covering landslide and debris flow hazards, based on multi-source data acquired with the technologies mentioned above. The complex and changeable characteristics of the GDMEWS are described. The architecture of the system, the composition of the multi-source database, the development mode and service logic, and the methods and key technologies of system development are also analyzed. To elaborate the implementation process of the GDMEWS, Deqin Tibetan County, with its unique terrain and diverse types of typical landslides and debris flows, is selected as the case study area. Firstly, the functional requirements and the monitoring and forecasting models of the system are discussed. Secondly, the logical relationships of the whole disaster process, including pre-disaster, disaster rescue and post-disaster reconstruction, are studied, and support tools for disaster prevention, disaster reduction and geological disaster management are developed. Thirdly, the methods for multi-source monitoring data integration and for generating and simulating the geological hazard mechanism model are presented. Finally, the construction of the GDMEWS is described; the system will be applied to the management, monitoring and forecasting of the whole disaster process in real time and dynamically in Deqin Tibetan County. Keywords: multi-source spatial data; geological disaster; monitoring and warning system; Deqin Tibetan County
Multimethod-Multisource Approach for Assessing High-Technology Training Systems.
ERIC Educational Resources Information Center
Shlechter, Theodore M.; And Others
This investigation examined the value of using a multimethod-multisource approach to assess high-technology training systems. The research strategy was utilized to provide empirical information on the instructional effectiveness of the Reserve Component Virtual Training Program (RCVTP), which was developed to improve the training of Army National…
Understanding the Influence of Emotions and Reflection upon Multi-Source Feedback Acceptance and Use
ERIC Educational Resources Information Center
Sargeant, Joan; Mann, Karen; Sinclair, Douglas; Van der Vleuten, Cees; Metsemakers, Job
2008-01-01
Introduction: Receiving negative performance feedback can elicit negative emotional reactions which can interfere with feedback acceptance and use. This study investigated emotional responses of family physicians' participating in a multi-source feedback (MSF) program, sources of these emotions, and their influence upon feedback acceptance and…
The MiPACQ Clinical Question Answering System
Cairns, Brian L.; Nielsen, Rodney D.; Masanz, James J.; Martin, James H.; Palmer, Martha S.; Ward, Wayne H.; Savova, Guergana K.
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system’s architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation. PMID:22195068
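A sketch of the two ranking metrics quoted in the evaluation above: Precision at One (is the top-ranked answer relevant?) and mean reciprocal rank (1 / rank of the first relevant answer, averaged over questions). The relevance lists are illustrative, not MiPACQ output.

```python
def precision_at_one(ranked_relevance):
    """ranked_relevance: one list of booleans per question, in ranked order."""
    return sum(r[0] for r in ranked_relevance if r) / len(ranked_relevance)

def mean_reciprocal_rank(ranked_relevance):
    total = 0.0
    for r in ranked_relevance:
        rank = next((i + 1 for i, rel in enumerate(r) if rel), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(ranked_relevance)

runs = [[True, False, False], [False, True, True], [False, False, False]]
print(precision_at_one(runs))       # 1/3
print(mean_reciprocal_rank(runs))   # (1 + 0.5 + 0) / 3 = 0.5
```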
Economic dispatch optimization for system integrating renewable energy sources
NASA Astrophysics Data System (ADS)
Jihane, Kartite; Mohamed, Cherkaoui
2018-05-01
Nowadays, energy use is growing, especially in the transportation and electricity industries. However, this energy is largely based on conventional sources, which pollute the environment. A multi-source system is seen as the best solution for sustainable development. This paper addresses the Economic Dispatch (ED) of a hybrid renewable power system composed of ten thermal generators, a photovoltaic (PV) generator and a wind turbine generator. To show the importance of renewable energy sources (RES) in the energy mix, we ran the simulation for the system integrating PV only and PV plus wind. The results show that the system with RES outperforms the system without RES in terms of fuel cost.
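A minimal economic-dispatch sketch of the idea above: renewable output is treated as free and subtracted from demand, and the remaining load is split among thermal units by minimizing total quadratic fuel cost subject to the power balance. Three hypothetical units stand in for the ten-generator system in the paper; all cost coefficients and limits are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([0.004, 0.006, 0.009])       # $/MW^2 h
b = np.array([5.3, 5.5, 5.8])             # $/MWh
c = np.array([500.0, 400.0, 200.0])       # $/h
p_min = np.array([100.0, 50.0, 50.0])
p_max = np.array([450.0, 350.0, 225.0])

demand, pv, wind = 800.0, 120.0, 60.0
net_load = demand - pv - wind             # thermal units cover the residual load

cost = lambda p: np.sum(a * p**2 + b * p + c)
balance = {"type": "eq", "fun": lambda p: np.sum(p) - net_load}
p0 = np.full(3, net_load / 3)

res = minimize(cost, p0, bounds=list(zip(p_min, p_max)), constraints=[balance])
print("dispatch (MW):", np.round(res.x, 1), " fuel cost ($/h):", round(res.fun, 1))
```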
The MiPACQ clinical question answering system.
Cairns, Brian L; Nielsen, Rodney D; Masanz, James J; Martin, James H; Palmer, Martha S; Ward, Wayne H; Savova, Guergana K
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system's architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation.
Terrestrial laser scanning in monitoring of anthropogenic objects
NASA Astrophysics Data System (ADS)
Zaczek-Peplinska, Janina; Kowalska, Maria
2017-12-01
The registered xyz coordinates in the form of a point cloud captured by terrestrial laser scanner and the intensity values (I) assigned to them make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like point distribution dependent on the distance between the scanner and the surveyed surface, angle of incidence, tasked scan's density and intensity value have to be taken into consideration. A prerequisite for running a correct analysis of the obtained point clouds registered during periodic measurements using a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface as well as examples of geometric analyses integrating geometric and physical data in one cloud of points: cloud point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possibilities of usage of terrestrial laser scanning data and display the necessity of using multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multisource data analyses with regard to intensity value correction due to the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
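A sketch of the simplest intensity correction for the beam's incidence angle mentioned above, assuming a Lambertian surface: recorded intensity falls off with cos(theta), so dividing by cos(theta) removes the geometric effect. Real corrections often also account for range; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np

def correct_intensity(intensity, incidence_angle_deg):
    """Normalize recorded intensity to normal incidence (theta = 0)."""
    theta = np.radians(np.asarray(incidence_angle_deg, float))
    return np.asarray(intensity, float) / np.cos(theta)

raw = np.array([1800.0, 1400.0, 900.0])
angles = np.array([5.0, 40.0, 65.0])               # degrees from the surface normal
print(np.round(correct_intensity(raw, angles), 1))
```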
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
Velpuri, Naga Manohar; Senay, Gabriel B.; Rowland, James; Verdin, James P.; Alemu, Henok; Melesse, Assefa M.; Abtew, Wossenu; Setegn, Shimelis G.
2014-01-01
Continental Africa has the highest volume of water stored in wetlands, large lakes, reservoirs, and rivers, yet it suffers from problems such as water availability and access. With climate change intensifying the hydrologic cycle and altering the distribution and frequency of rainfall, the problem of water availability and access will increase further. Famine Early Warning Systems Network (FEWS NET) funded by the United States Agency for International Development (USAID) has initiated a large-scale project to monitor small to medium surface water points in Africa. Under this project, multisource satellite data and hydrologic modeling techniques are integrated to monitor several hundreds of small to medium surface water points in Africa. This approach has been already tested to operationally monitor 41 water points in East Africa. The validation of modeled scaled depths with field-installed gauge data demonstrated the ability of the model to capture both the spatial patterns and seasonal variations. Modeled scaled estimates captured up to 60 % of the observed gauge variability with a mean root-mean-square error (RMSE) of 22 %. The data on relative water level, precipitation, and evapotranspiration (ETo) for water points in East and West Africa were modeled since 1998 and current information is being made available in near-real time. This chapter presents the approach, results from the East African study, and the first phase of expansion activities in the West Africa region. The water point monitoring network will be further expanded to cover much of sub-Saharan Africa. The goal of this study is to provide timely information on the water availability that would support already established FEWS NET activities in Africa. This chapter also presents the potential improvements in modeling approach to be implemented during future expansion in Africa.
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966
On the Discovery of Evolving Truth
Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei
2015-01-01
In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
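A small sketch of the iterative idea behind truth discovery described above: alternately estimate each object's truth as a reliability-weighted average of the claims and update each source's weight from its distance to the current truths. This is a generic illustration (a CRH-style weight update), not the incremental algorithm proposed in the paper.

```python
import numpy as np

# claims[s, o]: value that source s reports for object o (continuous data)
claims = np.array([[10.1,  4.9, 7.2],    # reliable source
                   [10.0,  5.1, 7.0],    # reliable source
                   [13.0,  2.0, 9.5]])   # unreliable source
weights = np.ones(claims.shape[0]) / claims.shape[0]

for _ in range(20):
    truths = weights @ claims / weights.sum()               # weighted truth estimate
    errors = np.sum((claims - truths) ** 2, axis=1) + 1e-9  # per-source deviation
    weights = -np.log(errors / errors.sum())                # lower error -> higher weight

print("estimated truths:", np.round(truths, 2))
print("source weights  :", np.round(weights / weights.sum(), 2))
```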
Research on precise modeling of buildings based on multi-source data fusion of air to ground
NASA Astrophysics Data System (ADS)
Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong
2016-03-01
Aiming at the accuracy problem of precise building modeling, an experimental study was conducted based on multi-source data for buildings in the same test area, including roof data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After accurately extracting the top and bottom outlines of building clusters, a series of qualitative and quantitative analyses was carried out on the 2D interval between the outlines. The results provide reliable accuracy support for precise building modeling by air-ground multi-source data fusion; at the same time, solutions to several key technical problems are discussed.
Harth, Yoram
2015-03-01
In the last decade, Radiofrequency (RF) energy has proven to be safe and highly efficacious for face and neck skin tightening, body contouring, and cellulite reduction. In contrast to first-generation Monopolar/Bipolar and "X-Polar" RF systems which use one RF generator connected to one or more skin electrodes, multisource radiofrequency devices use six independent RF generators allowing efficient dermal heating to 52-55°C, with no pain or risk of other side effects. In this review, the basic science and clinical results of body contouring and cellulite treatment using a multisource radiofrequency system (Endymed PRO, Endymed, Cesarea, Israel) will be discussed and analyzed. © 2015 Wiley Periodicals, Inc.
Development of Physical Therapy Practical Assessment System by Using Multisource Feedback
ERIC Educational Resources Information Center
Hengsomboon, Ninwisan; Pasiphol, Shotiga; Sujiva, Siridej
2017-01-01
The purposes of the research were (1) to develop the physical therapy practical assessment system by using the multisource feedback (MSF) approach and (2) to investigate the effectiveness of the implementation of the developed physical therapy practical assessment system. The development of physical therapy practical assessment system by using MSF…
ERIC Educational Resources Information Center
Roberts, Martin J.; Campbell, John L.; Richards, Suzanne H.; Wright, Christine
2013-01-01
Introduction: Multisource feedback (MSF) ratings provided by patients and colleagues are often poorly correlated with doctors' self-assessments. Doctors' reactions to feedback depend on its agreement with their own perceptions, but factors influencing self-other agreement in doctors' MSF ratings have received little attention. We aimed to identify…
Multi-Source Evaluation of Interpersonal and Communication Skills of Family Medicine Residents
ERIC Educational Resources Information Center
Leung, Kai-Kuen; Wang, Wei-Dan; Chen, Yen-Yuan
2012-01-01
There is a lack of information on the use of multi-source evaluation to assess trainees' interpersonal and communication skills in Oriental settings. This study is conducted to assess the reliability and applicability of assessing the interpersonal and communication skills of family medicine residents by patients, peer residents, nurses, and…
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that are not covered by these data, especially small-scale faults or those approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing the fault points in poorly sampled areas not only allows fault models to be constructed efficiently, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example of applying the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.
Li, Hao; Zhang, Gaofei; Ma, Rui; You, Zheng
2014-01-01
An effective multisource energy harvesting system is presented as power supply for wireless sensor nodes (WSNs). The advanced system contains not only an expandable power management module including control of the charging and discharging process of the lithium polymer battery but also an energy harvesting system using the maximum power point tracking (MPPT) circuit with analog driving scheme for the collection of both solar and vibration energy sources. Since the MPPT and the power management module are utilized, the system is able to effectively achieve a low power consumption. Furthermore, a super capacitor is integrated in the system so that current fluctuations of the lithium polymer battery during the charging and discharging processes can be properly reduced. In addition, through a simple analog switch circuit with low power consumption, the proposed system can successfully switch the power supply path according to the ambient energy sources and load power automatically. A practical WSNs platform shows that efficiency of the energy harvesting system can reach about 75-85% through the 24-hour environmental test, which confirms that the proposed system can be used as a long-term continuous power supply for WSNs.
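A sketch of a perturb-and-observe style maximum power point tracking loop, the general class of MPPT logic referenced above. The MPPT circuit in the paper is analog hardware; this software loop and its made-up photovoltaic power-voltage curve are only an illustration of the tracking principle.

```python
def pv_power(v):
    # toy PV power-voltage curve with its maximum (50 W) at about 17 V
    return max(50.0 - 0.5 * (v - 17.0) ** 2, 0.0)

def perturb_and_observe(v=12.0, step=0.5, iterations=40):
    p_prev, direction = pv_power(v), +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.1f} W")
```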
Li, Hao; Zhang, Gaofei; Ma, Rui; You, Zheng
2014-01-01
An effective multisource energy harvesting system is presented as power supply for wireless sensor nodes (WSNs). The advanced system contains not only an expandable power management module including control of the charging and discharging process of the lithium polymer battery but also an energy harvesting system using the maximum power point tracking (MPPT) circuit with analog driving scheme for the collection of both solar and vibration energy sources. Since the MPPT and the power management module are utilized, the system is able to effectively achieve a low power consumption. Furthermore, a super capacitor is integrated in the system so that current fluctuations of the lithium polymer battery during the charging and discharging processes can be properly reduced. In addition, through a simple analog switch circuit with low power consumption, the proposed system can successfully switch the power supply path according to the ambient energy sources and load power automatically. A practical WSNs platform shows that efficiency of the energy harvesting system can reach about 75–85% through the 24-hour environmental test, which confirms that the proposed system can be used as a long-term continuous power supply for WSNs. PMID:25032233
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.
Han, Changcai; Yang, Jinsheng
2017-10-30
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. The one-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are firstly formed based on geographical locations of sensor source nodes, the impairment of inter-node wireless channels and moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related with the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for the wireless sensor networks with different moving trajectories and the variant data sizes.
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network
Han, Changcai; Yang, Jinsheng
2017-01-01
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. The one-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are firstly formed based on geographical locations of sensor source nodes, the impairment of inter-node wireless channels and moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related with the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for the wireless sensor networks with different moving trajectories and the variant data sizes. PMID:29084155
Parrish, Jared W; Shanahan, Meghan E; Schnitzer, Patricia G; Lanier, Paul; Daniels, Julie L; Marshall, Stephen W
2017-12-01
Health informatics projects combining statewide birth populations with child welfare records have emerged as a valuable approach to conducting longitudinal research of child maltreatment. The potential bias resulting from linkage misspecification, partial cohort follow-up, and outcome misclassification in these studies has been largely unexplored. This study integrated epidemiological survey and novel administrative data sources to establish the Alaska Longitudinal Child Abuse and Neglect Linkage (ALCANLink) project. Using these data we evaluated and quantified the impact of non-linkage misspecification and single source maltreatment ascertainment use on reported maltreatment risk and effect estimates. The ALCANLink project integrates the 2009-2011 Alaska Pregnancy Risk Assessment Monitoring System (PRAMS) sample with multiple administrative databases through 2014, including one novel administrative source to track out-of-state emigration. For this project we limited our analysis to the 2009 PRAMS sample. We report on the impact of linkage quality, cohort follow-up, and multisource outcome ascertainment on the incidence proportion of reported maltreatment before age 6 and hazard ratios of selected characteristics that are often available in birth cohort linkage studies of maltreatment. Failure to account for out-of-state emigration biased the incidence proportion by 12% (from 28.3% w to 25.2% w ), and the hazard ratio (HR) by as much as 33% for some risk factors. Overly restrictive linkage parameters biased the incidence proportion downwards by 43% and the HR by as much as 27% for some factors. Multi-source linkages, on the other hand, were of little benefit for improving reported maltreatment ascertainment. Using the ALCANLink data which included a novel administrative data source, we were able to observe and quantify bias to both the incidence proportion and HR in a birth cohort linkage study of reported child maltreatment. Failure to account for out-of-state emigration and low-quality linkage methods may induce bias in longitudinal data linkage studies of child maltreatment which other researchers should be aware of. In this study multi-agency linkage did not lead to substantial increased detection of reported maltreatment. The ALCANLink methodology may be a practical approach for other states interested in developing longitudinal birth cohort linkage studies of maltreatment that requires limited resources to implement, provides comprehensive data elements, and can facilitate comparability between studies.
NASA Astrophysics Data System (ADS)
Hu, Rongming; Wang, Shu; Guo, Jiao; Guo, Liankun
2018-04-01
Impervious surface area and vegetation coverage are important biophysical indicators of urban surface features that can be derived from medium-resolution images. However, remote sensing data obtained by a single sensor are easily affected by factors such as weather conditions, and their spatial and temporal resolution cannot meet the needs of soil erosion estimation. Therefore, integrated multi-source remote sensing data are needed to estimate vegetation coverage at high spatio-temporal resolution. Vegetation coverage and impervious surface data at two spatial and temporal scales were obtained from MODIS and Landsat 8 remote sensing images. Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the vegetation coverage data at the two scales were fused, yielding fused vegetation coverage (ESTARFM FVC) and impervious surface layers with high spatiotemporal resolution (30 m, 8 day). On this basis, the spatial variability of the impervious surface and the vegetation cover landscape in the study area was measured by means of statistics and spatial autocorrelation analysis. The results showed that: 1) the ESTARFM FVC and impervious surface layers have high accuracy and can characterize the biophysical components covering the earth's surface; 2) the average impervious surface proportion and the spatial configuration of each area differ, being affected by natural conditions and urbanization. In the urban area of Xi'an, which has typical characteristics of spontaneous urbanization, landscapes are fragmented and have less spatial dependence.
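A sketch of global Moran's I, the standard statistic behind the spatial autocorrelation analysis mentioned above: values near +1 indicate clustering (similar neighbours), values near 0 a random pattern, and negative values dispersion. The small transect and adjacency weights are illustrative, not the Xi'an fractional-cover data.

```python
import numpy as np

def morans_i(values, weights):
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    z = x - x.mean()
    num = len(x) * np.sum(w * np.outer(z, z))   # N * sum_ij w_ij z_i z_j
    den = w.sum() * np.sum(z ** 2)              # W * sum_i z_i^2
    return num / den

# four cells along a transect; neighbours share an edge (chain adjacency)
fvc = [0.8, 0.7, 0.3, 0.2]                      # fractional vegetation cover
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(f"Moran's I = {morans_i(fvc, w):.3f}")    # positive: clustered pattern
```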
Developing a Cyberinfrastructure for integrated assessments of environmental contaminants.
Kaur, Taranjit; Singh, Jatinder; Goodale, Wing M; Kramar, David; Nelson, Peter
2005-03-01
The objective of this study was to design and implement prototype software for capturing field data and automating the process for reporting and analyzing the distribution of mercury. The four phase process used to design, develop, deploy and evaluate the prototype software is described. Two different development strategies were used: (1) design of a mobile data collection application intended to capture field data in a meaningful format and automate transfer into user databases, followed by (2) a re-engineering of the original software to develop an integrated database environment with improved methods for aggregating and sharing data. Results demonstrated that innovative use of commercially available hardware and software components can lead to the development of an end-to-end digital cyberinfrastructure that captures, records, stores, transmits, compiles and integrates multi-source data as it relates to mercury.
Ecosystem services of boreal forests - Carbon budget mapping at high resolution.
Akujärvi, Anu; Lehtonen, Aleksi; Liski, Jari
2016-10-01
The carbon (C) cycle of forests produces ecosystem services (ES) such as climate regulation and timber production. Mapping these ES using simple land cover -based proxies might add remarkable inaccuracy to the estimates. A framework to map the current status of the C budget of boreal forested landscapes was developed. The C stocks of biomass and soil and the annual change in these stocks were quantified in a 20 × 20 m resolution at the regional level on mineral soils in southern Finland. The fine-scale variation of the estimates was analyzed geo-statistically. The reliability of the estimates was evaluated by comparing them to measurements from the national multi-source forest inventory. The C stocks of forests increased slightly from the south coast to inland whereas the changes in these stocks were more uniform. The spatial patches of C stocks were larger than those of C stock changes. The patch size of the C stocks reflected the spatial variation in the environmental conditions, and that of the C stock changes the typical area of forest management compartments. The simulated estimates agreed well with the measurements indicating a good mapping framework performance. The mapping framework is the basis for evaluating the effects of forest management alternatives on C budget at high resolution across large spatial scales. It will be coupled with the assessment of other ES and biodiversity to study their relationships. The framework integrated a wide suite of simulation models and extensive inventory data. It provided reliable estimates of the human influence on C cycle in forested landscapes. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Kongwen; Hu, Baoxin; Robinson, Justin
2014-01-01
The emerald ash borer (EAB) poses a significant economic and environmental threat to ash trees in southern Ontario, Canada, and the northern states of the USA. It is critical that effective technologies are urgently developed to detect, monitor, and control the spread of EAB. This paper presents a methodology using multisourced data to predict potential infestations of EAB in the town of Oakville, Ontario, Canada. The information combined in this study includes remotely sensed data, such as high spatial resolution aerial imagery, commercial ground and airborne hyperspectral data, and Google Earth imagery, in addition to nonremotely sensed data, such as archived paper maps and documents. This wide range of data provides extensive information that can be used for early detection of EAB, yet their effective employment and use remain a significant challenge. A prediction function was developed to estimate the EAB infestation states of individual ash trees using three major attributes: leaf chlorophyll content, tree crown spatial pattern, and prior knowledge. Comparison between these predicted values and a ground-based survey demonstrated an overall accuracy of 62.5%, with 22.5% omission and 18.5% commission errors.
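The prediction function described above combines three attributes into an infestation estimate; a toy Python sketch of such a weighted score is shown below. The weights, the scaling of the inputs to [0, 1] and the function name are illustrative assumptions and not the function developed in the paper.

def eab_infestation_score(chlorophyll_index, crown_decline_index, prior_risk,
                          weights=(0.5, 0.3, 0.2)):
    """Toy scoring of EAB infestation likelihood from three attributes.

    All inputs are assumed to be scaled to [0, 1]; higher chlorophyll indicates a
    healthier tree, so its complement enters the score. Weights are illustrative."""
    w1, w2, w3 = weights
    score = w1 * (1.0 - chlorophyll_index) + w2 * crown_decline_index + w3 * prior_risk
    return min(max(score, 0.0), 1.0)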
A multi-source dataset of urban life in the city of Milan and the Province of Trentino.
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
NASA Astrophysics Data System (ADS)
Barlacchi, Gianni; de Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-10-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
Multisource energy system project
NASA Astrophysics Data System (ADS)
Dawson, R. W.; Cowan, R. A.
1987-03-01
The mission of this project is to investigate methods of providing uninterruptible power to Army communications and navigational facilities, many of which have limited access or are located in rugged terrain. Two alternatives are currently available for deploying terrestrial stand-alone power systems: (1) conventional electric systems powered by diesel fuel, propane, or natural gas, and (2) alternative power systems using renewable energy sources such as solar photovoltaics (PV) or wind turbines (WT). The increased cost of fuels for conventional systems and the high cost of energy storage for single-source renewable energy systems have created interest in the hybrid or multisource energy system. This report will provide a summary of the first and second interim reports, final test results, and a user's guide for software that will assist in applying and designing multi-source energy systems.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others. PMID:26528394
Emke, Amanda R; Cheng, Steven; Chen, Ling; Tian, Dajun; Dufault, Carolyn
2017-01-01
Phenomenon: Professionalism is integral to the role of the physician. Most professionalism assessments in medical training are delayed until clinical rotations where multisource feedback is available. This leaves a gap in student assessment portfolios and potentially delays professional development. A total of 246 second-year medical students (2013-2015) completed self- and peer assessments of professional behaviors in 2 courses following a series of Team-Based Learning exercises. Correlation and regression analyses were used to examine the alignment or misalignment in the relationship between the 2 types of assessments. Four subgroups were formed based on observed patterns of initial self- and peer assessment alignment or misalignment, and subgroup membership stability over time was assessed. A missing data analysis examined differences between average peer assessment scores as a function of selective nonparticipation. Spearman correlation demonstrated moderate to strong correlation between self-assessments completed alone (no simultaneous peer assessment) and self-assessments completed at the time of peer assessments (ρ = .59, p < .0001) but weak correlation between the two self-assessments and peer assessments (alone: ρ = .13, p < .013; at time of peer: ρ = .21, p < .0001). Generalized estimating equation models revealed that self-assessments done alone (p < .0001) were a significant predictor of self-assessments done at the time of peer. Course was also a significant predictor (p = .01) of self-assessment scores done at the time of peer. Peer assessment score was not a significant predictor. Bhapkar's test revealed subgroup membership based on the relationship between self- and peer ratings was relatively stable across Time 1 and Time 2 assessments (χ 2 = 0.83, p = .84) for all but one subgroup; members of the subgroup with initially high self-assessment and low peer assessment were significantly more likely to move to a new classification at the second measurement. A missing data analysis revealed that students who completed all self-assessments had significantly higher average peer assessment ratings compared to students who completed one or no self-assessments with a difference of -0.32, 95% confidence interval [-0.48, -0.15]. Insights: Multiple measurements of simultaneous self- and peer assessment identified a subgroup of students who consistently rated themselves higher on professionalism attributes relative to the low ratings given by their peers. This subgroup of preclinical students, along with those who elected to not complete self-assessments, may be at risk for professionalism concerns. Use of this multisource feedback tool to measure perceptual stability of professionalism behaviors is a new approach that may assist with early identification of at-risk students during preclinical years.
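The correlation step described above can be reproduced in outline with scipy; the short Python sketch below computes Spearman correlations between hypothetical self- and peer-assessment scores. The rating values are made up for illustration and the sample is far smaller than the study's 246 students.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical professionalism ratings (1-5 scale) for a handful of students
self_alone   = np.array([4.5, 4.0, 4.8, 3.9, 4.2])   # self-assessment, no peer form
self_at_peer = np.array([4.4, 4.1, 4.7, 3.8, 4.3])   # self-assessment at time of peer
peer_scores  = np.array([4.0, 3.5, 4.6, 4.1, 3.2])   # mean peer rating

rho, p = spearmanr(self_alone, self_at_peer)
print(f"self (alone) vs self (at peer): rho={rho:.2f}, p={p:.3f}")
rho, p = spearmanr(self_at_peer, peer_scores)
print(f"self (at peer) vs peer:         rho={rho:.2f}, p={p:.3f}")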
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.
Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-02-01
Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
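To make the integration step above concrete, the following small Python sketch combines the presence probabilities of two source models (for example, a correlative SDM and a physiological model) into a single consensus estimate by precision-weighting them on the logit scale. It is only a stand-in for the full hierarchical Bayesian metamodel; the probabilities, standard deviations and function name are illustrative assumptions.

import numpy as np

def combine_on_logit_scale(probs, sds):
    """Precision-weighted consensus of several models' presence probabilities
    for one grid cell (a stand-in for the full hierarchical Bayesian metamodel).

    probs : presence probabilities from the source models
    sds   : their standard deviations on the logit scale (assumed known)"""
    probs = np.asarray(probs, float)
    logits = np.log(probs / (1.0 - probs))
    w = 1.0 / np.asarray(sds, float) ** 2            # precision weights
    mean_logit = np.sum(w * logits) / np.sum(w)
    sd_logit = np.sqrt(1.0 / np.sum(w))              # shrinks where models agree
    return 1.0 / (1.0 + np.exp(-mean_logit)), sd_logit

# e.g. a correlative SDM (p = 0.7) and a physiological model (p = 0.4)
p, sd = combine_on_logit_scale([0.7, 0.4], [0.5, 0.8])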
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Primary Goal: Assist with the evaluation and measurement of wetlands hydroperiod at the Plum Brook Station using multi-source remote sensing data as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded on the multi-source remote sensing capabilities to help estimate and measure the hydroperiod and relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring the hydroperiod and relative soil moisture of wetlands. This is important because a changing regional climate poses several potential risks to wetland ecosystem function. The year-two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: 1) show the relative length of hydroperiod using available remote sensing datasets; 2) a date-linked table of wetlands extent over time for all feasible non-forested wetlands; 3) utilize LiDAR data to measure the topographic height above sea level of all wetlands, the wetland-to-catchment area ratio, the slope of wetlands, and other useful variables; 4) a demonstration of how analyzed results from multiple remote sensing data sources can help with wetlands vulnerability assessment; and 5) an MTRI-style report summarizing year-two results. This report serves as a descriptive summary of the completion of these deliverables. Additionally, two formal meetings were held with Larry Liou and Amanda Sprinzl to provide project updates and receive direction on outputs. These were held on 2/26/15 and 9/17/15 at the Plum Brook Station. Principal Component Analysis (PCA) is a multivariate statistical technique used to identify dominant spatial and temporal backscatter signatures. PCA reduces the information contained in the temporal dataset to the first few new Principal Component (PC) images. Some advantages of PCA include the ability to filter out temporal autocorrelation and relegate speckle to the higher-order PC images. A PCA was performed using ERDAS Imagine on a time series of PALSAR dates. Hydroperiod maps were created by separating the PALSAR dates into two date ranges, 2006-2008 and 2010, and performing an unsupervised classification on the PCAs.
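The PCA step described above can be sketched in a few lines of Python. The function below computes principal components of a SAR backscatter time series, treating dates as variables and pixels as observations; it is a simplified stand-in for the ERDAS Imagine workflow, and the array shapes and component count are assumptions.

import numpy as np

def pca_time_series(stack, n_components=3):
    """Principal components of a SAR time series (dates as variables, pixels as
    observations); a simplified stand-in for the ERDAS Imagine PCA step."""
    n_dates, rows, cols = stack.shape                 # stack: (n_dates, rows, cols)
    X = stack.reshape(n_dates, -1).T                  # (pixels, dates)
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)                     # (dates, dates) covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]     # largest eigenvalues first
    pcs = X @ vecs[:, order]                          # project pixels onto PCs
    return pcs.T.reshape(n_components, rows, cols)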
Lai, Michelle Mei Yee; Roberts, Noel; Martin, Jenepher
2014-09-17
Oral feedback from clinical educators is the traditional teaching method for improving clinical consultation skills in medical students. New approaches are needed to enhance this teaching model. Multisource feedback is a commonly used assessment method for learning among practising clinicians, but this assessment has not been explored rigorously in medical student education. This study seeks to evaluate whether additional feedback on patient satisfaction improves medical student performance. The Patient Teaching Associate (PTA) Feedback Study is a single-site, randomized, controlled, double-blinded trial with two parallel groups. An after-hours general practitioner clinic in Victoria, Australia, is adapted as a teaching clinic during the day. Medical students from two universities in their first clinical year participate in six simulated clinical consultations with ambulatory patient volunteers living with chronic illness. Eligible students will be randomized in equal proportions to receive patient satisfaction score feedback in addition to the usual multisource feedback, or the usual multisource feedback alone as control. Block randomization will be performed. We will assess patient satisfaction and consultation performance outcomes at baseline and after one semester and will compare any change in mean scores at the last session from that at baseline. We will model the data using regression analysis to determine any differences between intervention and control groups. Full ethical approval has been obtained for the study. This trial will comply with CONSORT guidelines and we will disseminate data at conferences and in peer-reviewed journals. This is the first proposed trial to determine whether consumer feedback enhances the use of multisource feedback in medical student education, and to assess the value of multisource feedback in teaching and learning about the management of ambulatory patients living with chronic conditions. Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12613001055796.
Multisource feedback analysis of pediatric outpatient teaching
2013-01-01
Background This study aims to evaluate the outpatient communication skills of medical students via multisource feedback, which may be useful to map future directions in improving physician-patient communication. Methods Family respondents of patients, a nurse, a clinical teacher, and a research assistant evaluated video-recorded medical students’ interactions with outpatients by using multisource feedback questionnaires; students also assessed their own skills. The questionnaires were answered based on the video-recorded interactions between outpatients and the medical students. Results A total of 60 family respondents of the 60 patients completed the questionnaires; 58 (96.7%) of them agreed to the video recording. Two reasons for reluctance were “personal privacy” issues and simply disagreeing with the video recording. The average satisfaction score of the 58 students was 85.1 points, indicating that the students’ performance fell in the category between satisfied and very satisfied. The family respondents were most satisfied with the “teacher’s attitude”, followed by “teaching quality”. In contrast, the family respondents were least satisfied with “being open to questions”. Among the 6 assessment domains of communication skills, the students scored highest on “explaining” and lowest on “giving recommendations”. In the detailed assessment by family respondents, the students scored lowest on “asking about life/school burden”. In the multisource analysis, the nurses’ mean score was much higher and the students’ mean self-assessment score was lower than the average scores on all domains. Conclusion The willingness and satisfaction of family respondents were high in this study. Students scored the lowest on giving recommendations to patients. Multisource feedback with video recording is useful in providing a more accurate evaluation of students’ communication competence and in identifying the areas of communication that require enhancement. PMID:24180615
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
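For context, the nonparametric baseline that this parametric method improves upon (Wax and Kailath, 1985) applies an information-theoretic criterion such as MDL to the eigenvalues of the sample correlation matrix. A minimal Python sketch of that baseline is given below; the snapshot count and eigenvalues would come from the array data, and the formula shown is the standard MDL expression rather than the paper's log-likelihood-based quantities.

import numpy as np

def mdl_num_sources(eigvals, n_snapshots):
    """Classic Wax-Kailath MDL estimate of the number of sources from the
    eigenvalues of the sample correlation matrix (illustrative baseline only)."""
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p = lam.size
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                                   # smallest p - k eigenvalues
        geo = np.exp(np.mean(np.log(tail)))              # geometric mean
        arith = np.mean(tail)                            # arithmetic mean
        mdl[k] = -n_snapshots * (p - k) * np.log(geo / arith) \
                 + 0.5 * k * (2 * p - k) * np.log(n_snapshots)
    return int(np.argmin(mdl))                           # estimated number of signals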
Multisource inverse-geometry CT. Part II. X-ray source design and prototype
Neculaes, V. Bogdan; Caiafa, Antonio; Cao, Yang; De Man, Bruno; Edic, Peter M.; Frutschy, Kristopher; Gunturi, Satish; Inzinna, Lou; Reynolds, Joseph; Vermilyea, Mark; Wagner, David; Zhang, Xi; Zou, Yun; Pelc, Norbert J.; Lounsberry, Brian
2016-01-01
Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the current beam to a small optical focal spot size (0.5 × 1.4 mm). Controlled emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast read out detectors to increase the x-ray flux magnitude further, while still staying within the stationary target inherent thermal limitations. PMID:27487878
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi2 variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the concentrations of urinary metabolites alone classifies the CKD stage of the patients correctly. PMID:27861591
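The second-stage model described above, an L2-penalized logistic regression built on the mined discriminant rules, can be sketched as follows with scikit-learn. The binary rule matrix and labels are synthetic placeholders and the regularization strength is an assumption; only the modelling step, not the rule-mining algorithm itself, is shown.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical rule matrix: each column is a binary discriminant rule (e.g. a
# metabolite concentration falling in a given interval, possibly crossed with a
# second variable); each row is a patient. y encodes CKD with (1) / without (0)
# renal failure. Both are synthetic here.
rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(110, 40))
y = rng.integers(0, 2, size=110)

# L2 penalty (C is the inverse regularization strength), as in the described
# second-stage model built on the mined rules.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(R, y)
print("training accuracy:", clf.score(R, y))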
Multisource inverse-geometry CT. Part II. X-ray source design and prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neculaes, V. Bogdan, E-mail: neculaes@ge.com; Caia
2016-08-15
Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the current beam to a small optical focal spot size (0.5 × 1.4 mm). Controlled emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast read out detectors to increase the x-ray flux magnitude further, while still staying within the stationary target inherent thermal limitations.
NASA Astrophysics Data System (ADS)
Eberle, J.; Schmullius, C.
2017-12-01
Increasing archives of global satellite data present a new challenge to handle multi-source satellite data in a user-friendly way. Any user is confronted with different data formats and data access services. In addition the handling of time-series data is complex as an automated processing and execution of data processing steps is needed to supply the user with the desired product for a specific area of interest. In order to simplify the access to data archives of various satellite missions and to facilitate the subsequent processing, a regional data and processing middleware has been developed. The aim of this system is to provide standardized and web-based interfaces to multi-source time-series data for individual regions on Earth. For further use and analysis uniform data formats and data access services are provided. Interfaces to data archives of the sensor MODIS (NASA) as well as the satellites Landsat (USGS) and Sentinel (ESA) have been integrated in the middleware. Various scientific algorithms, such as the calculation of trends and breakpoints of time-series data, can be carried out on the preprocessed data on the basis of uniform data management. Jupyter Notebooks are linked to the data and further processing can be conducted directly on the server using Python and the statistical language R. In addition to accessing EO data, the middleware is also used as an intermediary between the user and external databases (e.g., Flickr, YouTube). Standardized web services as specified by OGC are provided for all tools of the middleware. Currently, the use of cloud services is being researched to bring algorithms to the data. As a thematic example, an operational monitoring of vegetation phenology is being implemented on the basis of various optical satellite data and validation data from the German Weather Service. Other examples demonstrate the monitoring of wetlands focusing on automated discovery and access of Landsat and Sentinel data for local areas.
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to realize the correlation analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model of PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, PM10) and social media data (microblog data) was proposed, based on the Multivariate Statistical Analysis method. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was further studied using a machine learning model, the Back Propagation Neural Network (hereinafter referred to as BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an Autoregressive Integrated Moving Average (hereinafter referred to as ARIMA) time series model was applied in this paper to explore the short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis and pre-warning of PM2.5, and it also helps broaden the application of big data and multi-source data mining methods.
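As a sketch of the time-series step described above, the short Python example below fits an ARIMA model to a hypothetical daily PM2.5 series with statsmodels and forecasts the next three days. The series values and the (p, d, q) order are illustrative assumptions, not values from the study.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily PM2.5 series (ug/m3); the (1, 1, 1) order is illustrative,
# not the order identified in the study.
pm25 = np.array([60, 75, 120, 180, 90, 55, 70, 130, 210, 160,
                 85, 60, 95, 140, 175, 110, 70, 65, 100, 150], dtype=float)
model = ARIMA(pm25, order=(1, 1, 1)).fit()
print(model.forecast(steps=3))   # PM2.5 forecast for the next three days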
Greenbaum, Rebecca L; Quade, Matthew J; Mawritz, Mary B; Kim, Joongseo; Crosby, Durand
2014-11-01
We integrate deontological ethics (Folger, 1998, 2001; Kant, 1785/1948, 1797/1991) with conservation of resources theory (Hobfoll, 1989) to propose that an employee's repeated exposure to violations of moral principle can diminish the availability of resources to appropriately attend to other personal and work domains. In particular, we identify customer unethical behavior as a morally charged work demand that leads to a depletion of resources as captured by employee emotional exhaustion. In turn, emotionally exhausted employees experience higher levels of work-family conflict, relationship conflict with coworkers, and job neglect. Employee emotional exhaustion serves as the mediator between customer unethical behavior and such outcomes. To provide further evidence of a deontological effect, we demonstrate the unique effect of customer unethical behavior onto emotional exhaustion beyond perceptions of personal mistreatment and trait negative affectivity. In Study 1, we found support for our theoretical model using multisource field data from customer-service professionals across a variety of industries. In Study 2, we also found support for our theoretical model using multisource, longitudinal field data from service employees in a large government organization. Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration consists of several steps, and each step can be approached by various methods, all of which present advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
ERIC Educational Resources Information Center
Burns, G. Leonard; Desmul, Chris; Walsh, James A.; Silpakit, Chatchawan; Ussahawanitchakit, Phapruke
2009-01-01
Confirmatory factor analysis was used with a multitrait (attention-deficit/hyperactivity disorder-inattention, attention-deficit/hyperactivity disorder-hyperactivity/impulsivity, oppositional defiant disorder toward adults, academic competence, and social competence) by multisource (mothers and fathers) matrix to test the invariance and…
Ng, Kok-Yee; Koh, Christine; Ang, Soon; Kennedy, Jeffrey C; Chan, Kim-Yin
2011-09-01
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters followed by peers will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers show that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are strongest for subordinates than for peers and superiors; (d) the effects of collectivism on leniency were stronger for subordinates and peers than for superiors; effects on halo were stronger for subordinates than superiors, but these effects did not differ for subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings. PsycINFO Database Record (c) 2011 APA, all rights reserved
Velpuri, N.M.; Senay, G.B.; Asante, K.O.
2011-01-01
Managing limited surface water resources is a great challenge in areas where ground-based data are either limited or unavailable. Direct or indirect measurements of surface water resources through remote sensing offer several advantages for monitoring in ungauged basins. A physically based hydrologic technique to monitor lake water levels in ungauged basins using multi-source satellite data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, a digital elevation model, and other data is presented. This approach is applied to model Lake Turkana water levels from 1998 to 2009. Modelling results showed that the model can reasonably capture all the patterns and seasonal variations of the lake water level fluctuations. A composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data is used for model calibration (1998-2000) and model validation (2001-2009). Validation results showed that model-based lake levels are in good agreement with observed satellite altimetry data. Compared to satellite altimetry data, the Pearson's correlation coefficient was found to be 0.81 during the validation period. The model efficiency estimated using NSCE is found to be 0.93, 0.55 and 0.66 for the calibration, validation and combined periods, respectively. Further, the model-based estimates showed a root mean square error of 0.62 m and a mean absolute error of 0.46 m with a positive mean bias error of 0.36 m for the validation period (2001-2009). These error estimates were found to be less than 15% of the natural variability of the lake, thus giving high confidence in the modelled lake level estimates. The approach presented in this paper can be used to (a) simulate patterns of lake water level variations in data-scarce regions, (b) operationally monitor lake water levels in ungauged basins, (c) derive historical lake level information using satellite rainfall and evapotranspiration data, and (d) augment the information provided by satellite altimetry systems on changes in lake water levels. © Author(s) 2011.
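The evaluation statistics quoted above (Nash-Sutcliffe efficiency, RMSE, MAE) are straightforward to compute; a short Python sketch is given below for reference. The function names are assumptions, and the inputs would be the altimetry-observed and modelled lake level series.

import numpy as np

def nsce(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency between observed (altimetry)
    and simulated lake levels."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))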
2010-07-01
The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF), along with a team including the Pennsylvania State University (PSU), Iona College (Iona), and Tennessee State... of CMIF's current research on methods for Test and Evaluation ([7], [8]) involving, for example, large-factor-space experimental design techniques ([9]...
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and various supervised approaches have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high-resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multi-source information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, a vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at the local scale (up to 96.85% in overall accuracy), but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfactory accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multi-source data and sampling at the subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
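The classifier comparison described above can be prototyped quickly with scikit-learn; the Python sketch below trains a Random Forest and an SVM on a synthetic per-pixel feature stack and reports overall accuracy. The feature dimensions, class count and hyperparameters are illustrative assumptions, and the random data will not reproduce the accuracies reported in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical per-pixel feature stack: multiseasonal bands, NDVI, LST, topography
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 12))
y = rng.integers(0, 6, size=5000)            # six land-cover classes (synthetic)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=1)),
                  ("SVM", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    clf.fit(Xtr, ytr)
    print(name, "overall accuracy:", round(clf.score(Xte, yte), 3))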
NASA Astrophysics Data System (ADS)
Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.
2015-12-01
Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people don't have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with vital information to ensure food security. Google Earth Engine (GEE) is a cloud computing platform, which integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capabilities of storing and analyzing the data sets, it provides an ideal platform for the development of advanced analytic tools for extracting key variables used in regional and national food security systems. With the high performance computing and storing capabilities of GEE, a cloud-computing based system for near real-time crop land monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al.[1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. The seasonality of MODIS time series data, for example, the start date of the growing season, length of growing season, and NDVI peak at a field-level are obtained for evaluating the crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information of near real-time crop-growth conditions through the visualization of MODIS NDVI time series and comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at crop-field level online. This unique utilization of GEE in combination with advanced analytic and extraction techniques provides a vital remote sensing tool for decision makers and scientists with a high-degree of flexibility to adapt to different uses.
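The seasonality metrics mentioned above (start, end and length of the growing season, NDVI peak) can be derived from a smoothed NDVI profile with a simple fraction-of-amplitude rule, as in the Python sketch below. This is a simplified stand-in for TIMESAT, and the threshold fraction and function name are assumptions.

import numpy as np

def season_metrics(doy, ndvi, frac=0.5):
    """Start/end/length of season from a smoothed NDVI profile using a
    fraction-of-amplitude threshold (simplified stand-in for TIMESAT)."""
    doy = np.asarray(doy)
    ndvi = np.asarray(ndvi, float)
    thresh = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    above = ndvi >= thresh
    if not above.any():
        return None
    start = int(doy[np.argmax(above)])                       # first date above threshold
    end = int(doy[len(above) - 1 - np.argmax(above[::-1])])  # last date above threshold
    return {"start": start, "end": end, "length": end - start, "peak": float(ndvi.max())}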
NASA Astrophysics Data System (ADS)
Renschler, Chris S.; Wang, Zhihao
2017-10-01
In light of climate and land use change, stakeholders around the world are interested in assessing historic and likely future flood dynamics and flood extents for decision-making in watersheds with dams, limited availability of stream gages, and costly technical resources. This research evaluates an assessment and communication approach that combines GIS and hydraulic modeling based on recent remote sensing and topographic imagery, comparing the results to an actual flood event and available stream gages. On August 28th, 2011, floods caused by Hurricane Irene swept through a large rural area in New York State, leaving thousands of people homeless and devastating towns and cities. Damage was widespread, though the estimated and actual flood inundation and the associated return period were still unclear, since the flooding was artificially increased by flood water release due to fear of a dam break. This research uses the stream section right below the dam, between the two stream gages at North Blenheim and Breakabeen along Schoharie Creek, as a case study site to validate the approach. The data fusion approach uses a GIS, commonly available data sources, the hydraulic model HEC-RAS, and airborne LiDAR data collected two days after the flood event (Aug 30, 2011). The aerial imagery of the airborne survey depicts a low-flow event as well as evidence of the record flood, such as debris and other signs of damage, which is used to validate the hydrologic simulation results against the available stream gages. Model results were also compared to the official Federal Emergency Management Agency (FEMA) flood scenarios to determine the actual flood return period of the event. The dynamics of the flood levels were then used to visualize the flood and the actual loss of the Old Blenheim Bridge using Google Sketchup. Integration of multi-source data, cross-validation and visualization provides new ways to utilize pre- and post-event remote sensing imagery and hydrologic models to better understand and communicate the complex spatial-temporal dynamics, return periods and potential/actual consequences to decision-makers and the local population.
Sound Localization in Multisource Environments
2009-03-01
A total of 7 paid volunteer listeners (3 males and 4 females, 20-25 years of age) participated in the experiment. All had normal hearing... effects of the loudspeaker frequency responses, and were then sent from an experimental control computer to a Mark of the Unicorn (MOTU 24 I/O) digital-to-... after the overall multisource stimulus has been presented (the 'post-cue' condition). Eight listeners, ranging in age from...
A beam optics study of a modular multi-source X-ray tube for novel computed tomography applications
NASA Astrophysics Data System (ADS)
Walker, Brandon J.; Radtke, Jeff; Chen, Guang-Hong; Eliceiri, Kevin W.; Mackie, Thomas R.
2017-10-01
A modular implementation of a scanning multi-source X-ray tube is designed for the increasing number of multi-source imaging applications in computed tomography (CT). An electron beam array coupled with an oscillating magnetic deflector is proposed as a means for producing an X-ray focal spot at any position along a line. The preliminary multi-source model includes three thermionic electron guns that are deflected in tandem by a slowly varying magnetic field and pulsed according to a scanning sequence that is dependent on the intended imaging application. Particle tracking simulations with particle dynamics analysis software demonstrate that three 100 keV electron beams are laterally swept a combined distance of 15 cm over a stationary target with an oscillating magnetic field of 102 G perpendicular to the beam axis. Beam modulation is accomplished using 25 μs pulse widths to a grid electrode with a reverse gate bias of -500 V and an extraction voltage of +1000 V. Projected focal spot diameters are approximately 1 mm for 138 mA electron beams and the stationary target stays within thermal limits for the 14 kW module. This concept could be used as a research platform for investigating high-speed stationary CT scanners, for lowering dose with virtual fan beam formation, for reducing scatter radiation in cone-beam CT, or for other industrial applications.
General practitioner registrars' experiences of multisource feedback: a qualitative study.
Findlay, Nigel
2012-09-01
To explore the experiences of general practitioner (GP) specialty training registrars, thereby generating more understanding of the ways in which multisource feedback impacts upon their self-perceptions and professional behaviour, and provide information that might guide its use in the revalidation process of practising GPs. Complete transcripts of semi-structured, audio-taped qualitative interviews were analysed using the constant comparative method, to describe the experiences of multisource feedback for individual registrars. Five GP registrars participated. The first theme to emerge was the importance of the educational supervisor in encouraging the registrar through the emotional response, then facilitating interpretation of feedback and personal development. The second was the differing attitudes to learning and development, which may be in conflict with threats to self-image. The current RCGP format for obtaining multisource feedback for GP registrars may not always be achieving its purpose of challenging self-perceptions and motivating improved performance. An enhanced qualitative approach, through personal interviews rather than anonymous questionnaires, may provide a more accurate picture. This would address the concerns of some registrars by reducing their logistical burden and may facilitate more constructive feedback. The educational supervisor has an important role in promoting personal development, once this feedback is shared. The challenge for teaching organisations is to create a climate of comfort for learning, yet encourage learning beyond a 'comfort zone'.
Multisource Estimation of Long-term Global Terrestrial Surface Radiation
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.
2017-12-01
Land surface net radiation is the essential energy source at the earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and supplies heat, light, and energy for biological processes. Individual components of net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation such as global dimming or brightening have important implications for the hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While substantial progress has been made in measuring the top-of-atmosphere energy budget, large discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the lack of observational networks, the difficulty of measuring from space, and the uncertainty in algorithm parameters. To overcome the weaknesses of single-source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as the Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of the multiple datasets by minimizing the variance of the error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of its long-term variability and its attribution to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
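The merging step described above amounts, in its simplest form, to a minimum-variance (inverse-variance-weighted) combination of the candidate products at each grid cell and time step. A minimal Python sketch is shown below; the function name is an assumption, and the error variances would in practice come from validation against in-situ networks such as GEBA.

import numpy as np

def merge_radiation(estimates, error_variances):
    """Minimum-variance weighted average of net-radiation estimates from several
    satellite/reanalysis products at one grid cell; a simplified sketch of the
    merging step, with error variances taken from validation against in-situ data."""
    est = np.asarray(estimates, float)
    var = np.asarray(error_variances, float)
    w = (1.0 / var) / np.sum(1.0 / var)       # inverse-variance weights
    merged = np.sum(w * est)
    merged_var = 1.0 / np.sum(1.0 / var)      # uncertainty of the merged estimate
    return merged, merged_var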
Ultra-compact coherent receiver with serial interface for pluggable transceiver.
Itoh, Toshihiro; Nakajima, Fumito; Ohno, Tetsuichiro; Yamanaka, Shogo; Soma, Shunichi; Saida, Takashi; Nosaka, Hideyuki; Murata, Koichi
2014-09-22
An ultra-compact integrated coherent receiver with a volume of 1.3 cc using a quad-channel transimpedance amplifier (TIA)-IC chip with a serial peripheral interface (SPI) is demonstrated for the first time. The TIA with the SPI and photodiode (PD) bias circuits, a miniature dual polarization optical hybrid, an octal-PD and small optical coupling system enabled the realization of the compact receiver. Measured transmission performance with 32 Gbaud dual-polarization quadrature phase shift keying signal is equivalent to that of the conventional multi-source agreement-based integrated coherent receiver with dual channel TIA-ICs. By comparing the bit-error rate (BER) performance with that under continuous SPI access, we also confirmed that there is no BER degradation caused by SPI interface access. Such an ultra-compact receiver is promising for realizing a new generation of pluggable transceivers.
WE-DE-201-08: Multi-Source Rotating Shield Brachytherapy Apparatus for Prostate Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadkhah, H; Wu, X; Kim, Y
Purpose: To introduce a novel multi-source rotating shield brachytherapy (RSBT) apparatus for the precise simultaneous angular and linear positioning of all partially-shielded 153Gd radiation sources in interstitial needles for treating prostate cancer. The mechanism is designed to lower the detrimental dose to healthy tissues, the urethra in particular, relative to conventional high-dose-rate brachytherapy (HDR-BT) techniques. Methods: Following needle implantation, the delivery system is docked to the patient template. Each needle is coupled to a multi-source afterloader catheter by a connector passing through a shaft. The shafts are rotated by translating a moving template between two stationary templates. Shaft walls as well as moving template holes are threaded such that the resistive friction produced between the two parts exerts enough force on the shafts to bring about the rotation. Rotation of the shaft is then transmitted to the shielded source via several keys. Thus, shaft angular position is fully correlated with the position of the moving template. The catheter angles are simultaneously incremented throughout treatment as needed, and only a single 360° rotation of all catheters is needed for a full treatment. For each rotation angle, source depth in each needle is controlled by a multi-source afterloader, which is proposed as an array of belt-driven linear actuators, each of which drives a source wire. Results: Optimized treatment plans based on Monte Carlo dose calculations demonstrated RSBT with the proposed apparatus reduced urethral D1cc below that of conventional HDR-BT by 35% for urethral dose gradient volume within 3 mm of the urethra surface. Treatment time to deliver 20 Gy with the multi-source RSBT apparatus using nineteen 62.4 GBq 153Gd sources is 117 min. Conclusions: The proposed RSBT delivery apparatus in conjunction with multiple nitinol catheter-mounted platinum-shielded 153Gd sources enables a mechanically feasible urethra-sparing treatment technique for prostate cancer in a clinically reasonable timeframe.
SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, J; Gao, H
2015-06-15
Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first, reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the limited-view scheme. In this work, based on a coupled photoacoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory Quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotation of both the optical sources and the ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Ma, Zhenbo; Li, Ruixi; Song, Yaowu; Qu, Jianan; Zhang, Shouting; Yan, Changhai; Han, Jiangwei
2017-04-01
In this paper, multi-source (geophysical, geochemical, geological and remote sensing) datasets were used to construct multi-scale (district-, deposit-, and orebody-scale) 3D geological models and extract 3D exploration criteria for subsurface Mo-polymetallic exploration targeting in the Luanchuan district in China. The results indicate that (i) a series of region- to district-scale NW-trending thrusts, formed by the regional Indosinian Qinling orogenic events, controlled the main Mo-polymetallic mineralization; the secondary NW-trending district-scale folds, NE-trending faults and intrusive stock structures developed on this thrust framework during the Caledonian-Indosinian orogenic events, and they constitute the ore-bearing zones and ore-forming structures; (ii) the NW-trending district-scale and NE-trending deposit-scale normal faults are cross-cut and controlled by the Jurassic granite stocks in 3D space; they are associated with the magma-skarn Mo-polymetallic mineralization (the 3D buffer distance of the ore-forming granite stocks is 600 m) and with the NW-trending hydrothermal Pb-Zn deposits, which are surrounded by the Jurassic granite stocks and constrained by NW- or NE-trending faults (the 3D buffer distance of the ore-forming faults is 700 m); and (iii) nine Mo-polymetallic and four Pb-Zn targets were identified in the subsurface of the Luanchuan district.
NASA Astrophysics Data System (ADS)
Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus
2018-05-01
In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable in comparison to a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters if memory requirement is affordable). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.
Design and realization of disaster assessment algorithm after forest fire
NASA Astrophysics Data System (ADS)
Xu, Aijun; Wang, Danfeng; Tang, Lihua
2008-10-01
Based on GIS technology, this paper focuses on the application of a disaster assessment algorithm after forest fire and studies the design and realization of GIS-based disaster assessment. Through the analysis and processing of multi-source and heterogeneous data collected after a forest fire, the paper combines the foundation laid by domestic and foreign scholars in research on forest fire loss assessment with related knowledge of assessment, accounting and forest resources appraisal, in order to develop a theoretical framework and assessment indices for forest fire loss assessment. The technologies of boundary extraction, overlay analysis, and division processing of multi-source spatial data are used to realize the investigation of the burnt forest area and the computation of the fire area. The assessment provides evidence for fire cleaning in burnt areas and for new policies on restoration, in terms of the direct and indirect economic losses and the ecological and environmental damage caused by forest fire under different fire danger classes and different amounts of forest accumulation, thus making forest resources protection faster, more efficient and more economical. Finally, this paper takes Lin'an city of Zhejiang province as a test area to confirm the key technologies of the method presented in the paper.
NASA Astrophysics Data System (ADS)
Xie, Jiayu; Wang, Gongwen; Sha, Yazhou; Liu, Jiajun; Wen, Botao; Nie, Ming; Zhang, Shuai
2017-04-01
Integrating multi-source geoscience information (such as geology, geophysics, geochemistry, and remote sensing) using GIS mapping is one of the key topics and frontiers in quantitative geosciences for mineral exploration. GIS prospective mapping and three-dimensional (3D) modeling can be used not only to extract exploration criteria and delineate metallogenetic targets but also to provide important information for the quantitative assessment of mineral resources. This paper uses the Shangnan district of Shaanxi province (China) as a case study area. GIS mapping and potential granite-hydrothermal uranium targeting were conducted in the study area combining weights of evidence (WofE) and concentration-area (C-A) fractal methods with multi-source geoscience information. 3D deposit-scale modeling using GOCAD software was performed to validate the shapes and features of the potential targets at the subsurface. The research results show that: (1) the known deposits have potential zones at depth, and the 3D geological models can delineate surface or subsurface ore-forming features, which can be used to analyze the uncertainty of the shape and feature of prospectivity mapping at the subsurface; (2) single geochemistry anomalies or remote sensing anomalies at the surface require combining the depth exploration criteria of geophysics to identify potential targets; and (3) the single or sparse exploration criteria zone with few mineralization spots at the surface has high uncertainty in terms of the exploration target.
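Weights of evidence reduces, for each binary evidence layer (for example the 600 m granite-stock buffer mentioned above), to a pair of log-ratios and their contrast. The sketch below computes W+, W- and the contrast C from unit-cell counts; the counts themselves are hypothetical and not taken from the Shangnan study.

```python
# Minimal weights-of-evidence sketch (illustrative, not the paper's GIS workflow):
# compute W+, W- and the contrast C for one binary evidence layer from cell counts.
import math

def wofe_weights(n_total, n_deposit, n_evidence, n_evidence_and_deposit):
    """Counts are unit cells: study area, deposit cells, evidence cells, overlap."""
    d, b, bd = n_deposit, n_evidence, n_evidence_and_deposit
    p_b_given_d = bd / d
    p_b_given_nd = (b - bd) / (n_total - d)
    w_plus = math.log(p_b_given_d / p_b_given_nd)
    w_minus = math.log((1.0 - p_b_given_d) / (1.0 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus   # contrast C measures spatial association

# Hypothetical counts: 10,000 cells, 50 deposit cells, 1,200 cells inside the
# granite-stock buffer, 30 of which host deposits.
print(wofe_weights(10_000, 50, 1_200, 30))
```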
A Video Game Platform for Exploring Satellite and In-Situ Data Streams
NASA Astrophysics Data System (ADS)
Cai, Y.
2014-12-01
Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models intend to reveal the inner structure, dynamics, and relationships of things; however, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.
Sharing Health Big Data for Research - A Design by Use Cases: The INSHARE Platform Approach.
Bouzillé, Guillaume; Westerlynck, Richard; Defossez, Gautier; Bouslimi, Dalel; Bayat, Sahar; Riou, Christine; Busnel, Yann; Le Guillou, Clara; Cauvin, Jean-Michel; Jacquelinet, Christian; Pladys, Patrick; Oger, Emmanuel; Stindel, Eric; Ingrand, Pierre; Coatrieux, Gouenou; Cuggia, Marc
2017-01-01
Sharing and exploiting Health Big Data (HBD) requires tackling several challenges: data protection and governance, taking into account legal, ethical, and deontological aspects, to enable a trustful, transparent and win-win relationship between researchers, citizens, and data providers; and a lack of interoperability, as data are compartmentalized and syntactically/semantically heterogeneous. The INSHARE project explores, through an experimental proof of concept, how recent technologies can overcome such issues. Involving six data providers, the platform was designed in three steps: (1) analyze use cases, needs, and requirements; (2) define the data sharing governance and secure access to the platform; and (3) define the platform specifications. Three use cases, drawn from 5 studies and 11 data sources, were analyzed for the platform design. The governance was derived from the SCANNER model and adapted to data sharing. The platform architecture integrates: a data repository and hosting, semantic integration services, data processing, aggregate computing, data quality and integrity monitoring, ID linking, a multisource query builder, visualization and data export services, data governance, a study management service, and security including data watermarking.
A research on the positioning technology of vehicle navigation system from single source to "ASPN"
NASA Astrophysics Data System (ADS)
Zhang, Jing; Li, Haizhou; Chen, Yu; Chen, Hongyue; Sun, Qian
2017-10-01
Due to the suddenness and complexity of modern warfare, land-based weapon systems need to have precision strike capability on roads and railways. The vehicle navigation system is one of the most important pieces of equipment for land-based weapon systems with precision strike capability. Single-source navigation systems have inherent shortcomings in providing continuous and stable navigation information. To overcome these shortcomings, multi-source positioning technology has been developed. The All Source Positioning and Navigation (ASPN) program was proposed in 2010, which seeks to enable low-cost, robust, and seamless navigation solutions for military use on any operational platform and in any environment, with or without GPS. The development trend of vehicle positioning technology is reviewed in this paper. The trend indicates that positioning technology has developed from single source and multi-source to ASPN. The data fusion techniques based on multi-source and ASPN are analyzed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; White, Amanda M.; Whitney, Paul D.
2013-06-04
The Multi-Source Signatures for Nuclear Programs project, part of Pacific Northwest National Laboratory's (PNNL) Signature Discovery Initiative, seeks to computationally capture expert assessment of multi-type information such as text, sensor output, imagery, or audio/video files, to assess nuclear activities through a series of Bayesian network (BN) models. These models incorporate knowledge from a diverse range of information sources in order to help assess a country's nuclear activities. The models span engineering topic areas, state-level indicators, and facility-specific characteristics. To illustrate the development, calibration, and use of BN models for multi-source assessment, we present a model that predicts a country's likelihood to participate in the international nuclear nonproliferation regime. We validate this model by examining the extent to which the model assists non-experts in arriving at conclusions similar to those provided by nuclear proliferation experts. We also describe the PNNL-developed software used throughout the lifecycle of the Bayesian network model development.
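At its core, a Bayesian network of the kind described above updates the probability of a hypothesis node from observed indicator nodes. The toy sketch below performs that update for a single binary hypothesis with two conditionally independent binary indicators; the structure, conditional probability values, and indicator names are invented for illustration and are not the PNNL models.

```python
# Toy Bayesian-network update (illustrative only). Two binary indicator nodes
# E1, E2 are conditionally independent given the hypothesis H = "participates
# in the nonproliferation regime".
def posterior(prior_h, likelihoods, evidence):
    """likelihoods[e] = (P(e=1 | H=1), P(e=1 | H=0)); evidence[e] in {0, 1}."""
    p_h, p_not_h = prior_h, 1.0 - prior_h
    for name, observed in evidence.items():
        p1, p0 = likelihoods[name]
        p_h *= p1 if observed else (1.0 - p1)
        p_not_h *= p0 if observed else (1.0 - p0)
    return p_h / (p_h + p_not_h)

likelihoods = {"E1": (0.80, 0.30),   # hypothetical CPT entries
               "E2": (0.60, 0.20)}
print(posterior(0.5, likelihoods, {"E1": 1, "E2": 0}))  # -> P(H=1 | E1=1, E2=0)
```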
Variable cycle control model for intersection based on multi-source information
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan
2018-05-01
In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
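The lane-group Cell Transmission Model in the lower level builds on the standard CTM update, in which the flow between adjacent cells is the minimum of the upstream sending capacity and the downstream receiving capacity. A minimal sketch of that standard update (not the paper's lane-group variant) follows; the cell parameters are hypothetical.

```python
# Minimal Cell Transmission Model step: flows between cells are limited by
# sending and receiving capacities, then cell occupancies are updated.
def ctm_step(n, Q, N_jam, v_ratio=1.0, w_ratio=1.0):
    """n: vehicles per cell; Q: max flow per step; N_jam: jam occupancy per cell."""
    flows = []
    for i in range(len(n) - 1):
        sending = min(v_ratio * n[i], Q)                  # demand of upstream cell
        receiving = min(Q, w_ratio * (N_jam - n[i + 1]))  # supply of downstream cell
        flows.append(min(sending, receiving))
    new_n = n[:]
    for i, f in enumerate(flows):
        new_n[i] -= f
        new_n[i + 1] += f
    return new_n

state = [12.0, 20.0, 20.0, 5.0]          # hypothetical cell occupancies
print(ctm_step(state, Q=10.0, N_jam=20.0))
```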
Processing multisource feedback during residency under the guidance of a non-medical coach
Eckenhausen, Marina A.W.; ten Cate, Olle
2018-01-01
Objectives The present study aimed to investigate residents' preferences in dealing with personal multi-source feedback (MSF) reports with or without the support of a coach. Methods Residents employed for at least half a year in the study hospital were eligible to participate. All 43 residents opting to discuss their MSF report with a psychologist-coach before discussing results with the program director were included. Semi-structured interviews were conducted following individual coaching sessions. Qualitative and quantitative data were gathered using field notes. Results Seventy-four percent (n = 32) preferred always sharing the MSF report with a coach, 21% (n = 9) if either the feedback or the relationship with the program director was less favorable, and 5% (n = 2) saw no difference between discussing with a coach or with the program director. In the final stage of training, residents more often preferred the coach (82.6%, n = 19) than in the first stages (65%, n = 13). Reasons for discussing the report with a coach included her neutral and objective position, her expertise, and the open and safe context during the discussion. Conclusions Most residents preferred discussing multisource feedback results with a coach before their meeting with a program director, particularly if the results were negative. They appeared to struggle with the dual role of the program director (coaching and judging) and appreciated the expertise of a dedicated coach to navigate this confrontation. We encourage residency programs to consider offering residents neutral coaching when processing multisource feedback. PMID:29478041
Multisource Feedback in the Ambulatory Setting
Warm, Eric J.; Schauer, Daniel; Revis, Brian; Boex, James R.
2010-01-01
Background The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. Methods Residents participating in a year-long group practice experience called the “long-block” received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. Results Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. Conclusion Use of multisource feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effect of such feedback, as well as the relationships between each of the components in the MSF data set. PMID:21975632
NASA Astrophysics Data System (ADS)
Vieira, João; da Conceição Cunha, Maria
2017-04-01
A multi-objective decision model has been developed to identify the Pareto-optimal set of management alternatives for the conjunctive use of surface water and groundwater of a multisource urban water supply system. A multi-objective evolutionary algorithm, Borg MOEA, is used to solve the multi-objective decision model. The multiple solutions can be shown to stakeholders, allowing them to choose their own solutions depending on their preferences. The multisource urban water supply system studied here depends on surface water and groundwater and is located in the Algarve region, the southernmost province of Portugal, with a typical warm Mediterranean climate. The rainfall is low, intermittent and concentrated in a short winter, followed by a long and dry period. A base population of 450,000 inhabitants and visits by more than 13 million tourists per year, mostly in summertime, make water management critical and challenging. Previous studies on single-objective optimization after aggregating multiple objectives together have already concluded that only an integrated and interannual water resources management perspective can be efficient for water resource allocation in this drought-prone region. A simulation model of the multisource urban water supply system, using mathematical functions to represent the water balance in the surface reservoirs, the groundwater flow in the aquifers, and the water transport in the distribution network with explicit representation of water quality, is coupled with Borg MOEA. The multi-objective problem formulation includes five objectives. Two objectives separately evaluate the water quantity and the water quality supplied for urban use over a finite time horizon, one objective calculates the operating costs, and two objectives appraise the state of the two water sources - the storage in the surface reservoir and the piezometric levels in the aquifer - at the end of the time horizon. The decision variables are the volumes of withdrawals from each water source in each time step (i.e., reservoir diversion and groundwater pumping). The results provide valuable information for analysing the impacts of the conjunctive use of surface water and groundwater. For example, considering a drought scenario, the results show how the same level of total water supplied can be achieved by different management alternatives with different impacts on the water quality, costs, and the state of the water sources at the end of the time horizon. The results also allow a clear understanding of the potential benefits of the conjunctive use of surface water and groundwater through the mitigation of the variation in the availability of surface water, improving the water quantity and/or water quality delivered to the users, or the better adaptation of such systems to a changing world.
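Whatever the search engine (here Borg MOEA), the end product is a Pareto-optimal set: alternatives that no other alternative improves on in every objective. The sketch below shows the plain dominance filter for five minimised objectives; it is not Borg itself, and the example objective values are invented.

```python
# Sketch of the Pareto filter underlying any multi-objective search: keep
# management alternatives that are not dominated on the minimised objectives.
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical alternatives: (supply deficit, quality penalty, operating cost,
# reservoir drawdown, aquifer drawdown) -- all to be minimised.
alts = [(0.0, 2.0, 10.0, 1.0, 3.0),
        (1.0, 1.0, 8.0, 2.0, 2.0),
        (1.0, 2.5, 12.0, 2.0, 3.5)]
print(pareto_front(alts))   # the third alternative is dominated by the second
```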
Multi-Source Sensor Fusion for Small Unmanned Aircraft Systems Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Cook, Brandon; Cohen, Kelly
2017-01-01
As the applications for using small Unmanned Aircraft Systems (sUAS) beyond visual line of sight (BVLOS) continue to grow in the coming years, it is imperative that intelligent sensor fusion techniques be explored. In BVLOS scenarios the vehicle position must be accurately tracked over time to ensure no two vehicles collide with one another, no vehicle crashes into surrounding structures, and to identify off-nominal scenarios. Therefore, in this study an intelligent systems approach is used to estimate the position of sUAS given a variety of sensor platforms, including GPS, radar, and on-board detection hardware. Common research challenges include asynchronous sensor rates and sensor reliability. In an effort to address these challenges, techniques such as Maximum a Posteriori estimation and a Fuzzy Logic-based sensor confidence determination are used.
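One simple way to picture the confidence-weighted fusion described above is a weighted average of sensor position reports, with weights derived from each sensor's confidence and the staleness of its last report. The sketch below is a simplified stand-in, not the study's fuzzy-logic implementation; the decay rule and the numbers are assumptions.

```python
# Minimal confidence-weighted position fusion. Each sensor report carries a
# position estimate, a confidence in [0, 1], and its age in seconds; stale
# reports are down-weighted to mimic asynchronous update rates.
import numpy as np

def fuse(reports, staleness_tau=2.0):
    """reports: list of (position, confidence, age_seconds)."""
    weights, positions = [], []
    for pos, conf, age in reports:
        weights.append(conf * np.exp(-age / staleness_tau))  # hypothetical decay rule
        positions.append(np.asarray(pos, dtype=float))
    w = np.asarray(weights)
    return (w[:, None] * np.vstack(positions)).sum(axis=0) / w.sum()

reports = [([100.0, 50.0, 120.0], 0.9, 0.1),   # GPS
           ([102.0, 49.0, 118.0], 0.6, 1.5),   # radar track
           ([101.0, 51.0, 121.0], 0.4, 3.0)]   # on-board detection
print(fuse(reports))
```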
SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K; Kim, D; Kim, T
2015-06-15
Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system, such as the scatter problem induced by large detector size and cone-beam artifact. In this study, we present the concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifact. Methods: Contrary to the conventional CT system, the projection data at a certain angle in IGCT are a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with extremely short time gaps. For 4D IGCT imaging, the time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. The acquired PGs were sorted into 10 phases and image reconstructions were performed independently at each phase. An image reconstruction algorithm based upon filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information of patient anatomy, as the results were more robust against motion artifact than 3D CT. Accordingly, the method should be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT&Future Planning.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
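Once the LQL approximation has linearised the problem, each of the three linear inverse problems can be attacked with a regularized conjugate-gradient solver. The sketch below solves the damped normal equations for a generic dense sensitivity matrix with SciPy's conjugate gradient; the matrix, noise, and regularization weight are placeholders rather than anything from the airborne EM application.

```python
# Sketch of the linear step inside an LQL-style inversion: solve the normal
# equations (A^T A + lambda I) m = A^T d with conjugate gradients.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
n_data, n_model = 120, 80
A = rng.normal(size=(n_data, n_model))        # stand-in linearised sensitivity matrix
m_true = rng.normal(size=n_model)
d = A @ m_true + 0.05 * rng.normal(size=n_data)
lam = 1.0                                     # regularisation weight (hypothetical)

op = LinearOperator((n_model, n_model),
                    matvec=lambda m: A.T @ (A @ m) + lam * m)
m_est, info = cg(op, A.T @ d, maxiter=500)
print("converged:", info == 0,
      " relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```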
The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration
NASA Astrophysics Data System (ADS)
Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.
2018-04-01
Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. Aiming at the problems of low monitoring frequency and poor data currency in current remote sensing monitoring, this paper researched and developed a cloud-integrated real-time monitoring service platform for land supervision, which enhances the monitoring frequency by comprehensively acquiring domestic satellite image data and accelerates remote sensing image data processing by exploiting intelligent dynamic processing technology for multi-source images. Through a pilot application in the Jinan Bureau of State Land Supervision, the real-time monitoring technical method for land supervision has been proved feasible. In addition, real-time monitoring and early warning functions are provided for illegal land use, permanent basic farmland protection and boundary breakthroughs in urban development. The application has achieved remarkable results.
Li, Wen-Jie; Zhang, Shi-Huang; Wang, Hui-Min
2011-12-01
Ecosystem services evaluation is a hot topic in current ecosystem management and has a close link with human welfare. This paper summarized the research progress on the evaluation of ecosystem services based on geographic information system (GIS) and remote sensing (RS) technology, which can be reduced to the following three characteristics: ecological economics theory is widely applied as a key method in quantifying ecosystem services; GIS and RS technology play a key role in multi-source data acquisition, spatiotemporal analysis, and integrated platforms; and ecosystem mechanism models have become a powerful tool for understanding the relationships between natural phenomena and human activities. Aiming at the present research status and its inadequacies, this paper put forward an "Assembly Line" framework, which is a distributed framework with scalable characteristics, and discussed the future development trend of integration research on ecosystem services evaluation based on GIS and RS technologies.
NASA Astrophysics Data System (ADS)
Ren, W.; Huang, Y.; Tao, B.; Zhu, X.; Tian, H.
2017-12-01
The agriculture sector is estimated to be responsible for 12% of total greenhouse gas emissions, and in particular for 52% of CH4 and 84% of N2O. It has been predicted that the world population will reach 9.7 billion by 2050 and require a 60 percent increase in total agricultural production above the level of 2005-07, which would potentially further boost greenhouse gas emissions from agroecosystems. The growing concerns over food security and the rapid rate of global warming necessitate the development of conservation management (or climate-smart soil management) that can ensure high crop yield while markedly enhancing soil carbon sequestration and reducing GHG emissions. In this study, we synthesize multi-source datasets and apply an improved agroecosystem model to quantitatively investigate the dynamics of CH4 and N2O fluxes as influenced by conservation management practices in cropping systems of Asia (such as wheat, corn, and rice), exploring the potential of those practices to mitigate and adapt to climate change. Our preliminary results suggest that conservation tillage (e.g., reduced and no tillage) can largely suppress CH4 emissions from Asia's rice paddies, although it may, to some extent, stimulate N2O emissions compared with conventional tillage.
Bezrukova, Katerina; Spell, Chester S; Caldwell, David; Burger, Jerry M
2016-01-01
Integrating the literature on faultlines, conflict, and pay, we drew on the basic principles of multilevel theory and differentiated between group- and organizational-level faultlines to introduce a novel multilevel perspective on faultlines. Using multisource, multilevel data on 30 Major League Baseball (MLB) teams, we found that group-level faultlines were negatively associated with group performance, and that internally focused conflict exacerbated but externally focused conflict mitigated this effect. Organizational-level faultlines were negatively related to organizational performance, and were most harmful in organizations with high levels of compensation. Implications for groups and teams in the sports/entertainment and other industries are discussed. (c) 2016 APA, all rights reserved).
Cervo, Silvia; Rovina, Jane; Talamini, Renato; Perin, Tiziana; Canzonieri, Vincenzo; De Paoli, Paolo; Steffan, Agostino
2013-07-30
Efforts to improve patients' understanding of their own medical treatments or the research in which they are involved are progressing, especially with regard to informed consent procedures. We aimed to design a multisource informed consent procedure that is easily adaptable to both clinical and research applications, and to evaluate its effectiveness in terms of understanding and awareness, even in less educated patients. We designed a multisource informed consent procedure for patients' enrolment in a Cancer Institute Biobank (CRO-Biobank). From October 2009 to July 2011, a total of 550 cancer patients admitted to the Centro di Riferimento Oncologico IRCCS Aviano, who agreed to contribute to its biobank, were consecutively enrolled. Participants were asked to answer a self-administered questionnaire aimed at exploring their understanding of biobanks and their needs for information on this topic, before and after study participation. Chi-square tests were performed on the questionnaire answers, according to gender or education. Of the 430 patients who returned the questionnaire, only 36.5% knew what a biobank was before participating in the study. Patients with less formal education were less informed by some sources (the Internet, newspapers, magazines, and our Institute). The final assessment test, taken after the multisource informed consent procedure, showed more than 95% correct answers. The information received was judged to be very or fairly understandable in almost all cases. More than 95% of patients were aware of participating in a biobank project, and gave helping cancer research (67.5%), moral obligation, and supporting cancer care as the main reasons for their involvement. Our multisource informed consent information system allowed a high rate of understanding and awareness of study participation, even among less-educated participants, and could be an effective and easy-to-apply model for supporting a well-informed decision-making process in several fields, from clinical practice to research. Further studies are needed to explore the effects on study comprehension of each source of information, and of other sources suggested by participants in the questionnaire.
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272
Bi-level multi-source learning for heterogeneous block-wise missing data.
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-11-15
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.
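A crude way to avoid imputing block-wise missing sources is to train one model per source on the subjects that actually have that source and average whichever predictions are available for each subject. The sketch below illustrates that idea only; it is far simpler than the bi-level model proposed in the paper, and the sources, dimensions, and missingness rates are synthetic.

```python
# Simplified stand-in for learning with block-wise missing sources: fit one
# ridge model per data source on the subjects that have it, then average the
# available per-source predictions for each subject (no imputation of blocks).
import numpy as np

rng = np.random.default_rng(2)
n, dims = 200, {"MRI": 10, "PET": 8, "CSF": 3}
X = {s: rng.normal(size=(n, d)) for s, d in dims.items()}
y = X["MRI"][:, 0] + 0.5 * X["PET"][:, 1] + 0.1 * rng.normal(size=n)
available = {s: rng.random(n) < p for s, p in
             {"MRI": 0.95, "PET": 0.5, "CSF": 0.5}.items()}   # block-wise missingness

def ridge_fit(A, b, lam=1.0):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

models = {s: ridge_fit(X[s][available[s]], y[available[s]]) for s in dims}

def predict(i):
    preds = [X[s][i] @ models[s] for s in dims if available[s][i]]
    return np.mean(preds) if preds else np.nan

print([round(predict(i), 2) for i in range(5)])
```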
Integrating multisource imagery and GIS analysis for mapping Bermuda's benthic habitats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vierros, M.K.
1997-06-01
Bermuda is a group of isolated oceanic islands situated in the northwest Atlantic Ocean and surrounded by the Sargasso Sea. Bermuda possesses the northernmost coral reefs and mangroves in the Atlantic Ocean, and because of its high population density, both the terrestrial and marine environments are under intense human pressure. Although a long record of scientific research exists, this study is the first attempt to comprehensively map the area's benthic habitats, despite the need for such a map for resource assessment and management purposes. Multi-source and multi-date imagery were used for producing the habitat map due to the lack of a complete up-to-date image. Classifications were performed with SPOT data, and the results were verified from recent aerial photography and current aerial video, along with extensive ground truthing. Stratification of the image into regions prior to classification reduced the confusing effects of varying water depth. Classification accuracy in shallow areas was increased by derivation of a texture pseudo-channel, while bathymetry was used as a classification tool in deeper areas, where local patterns of zonation were well known. Because of seasonal variation in the extent of seagrasses, a classification scheme based on density could not be used. Instead, a set of classes based on the seagrass areas' exposure to the open ocean was developed. The resulting habitat map is currently being assessed for accuracy with promising preliminary results, indicating its usefulness as a basis for future resource assessment studies.
Detecting misinformation and knowledge conflicts in relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian
2014-06-01
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commander's information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against the data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modification to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In the experimental study with SYNCOIN dataset, our algorithms achieved perfect conflict detection in ideal situation (no missing data) while producing 82% recall and 90% precision in realistic noise situation (15% of missing attributes).
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is a key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach each generalized numerical lattice brings post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc)[1]. From this representation generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application in space science data, highlighting the possibility of a real time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated to X flare-classes. The association with geomagnetic activity is also reported. DFA method is performed in the framework of a radio burst automatic monitoring system. Our results may characterize the variability pattern evolution, computing the DFA scaling exponent, scanning the time series by a short windowing before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in a Compute Unified Device Architecture (CUDA) by using the K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
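The post-analytical measure attached to each generalized numerical lattice in the case study is the DFA scaling exponent. A compact DFA implementation is sketched below as an illustration of that measure; the window sizes and the white-noise test signal are arbitrary choices, not the solar-burst data.

```python
# Minimal detrended fluctuation analysis (DFA): integrate the series, detrend
# it locally in windows of increasing size, and fit the scaling exponent.
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for k in range(n_seg):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # scaling exponent = slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

print(dfa_exponent(np.random.default_rng(3).normal(size=4096)))  # ~0.5 for white noise
```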
Evaluating the potential of improving residential water balance at building scale.
Agudelo-Vera, Claudia M; Keesman, Karel J; Mels, Adriaan R; Rijnaarts, Huub H M
2013-12-15
Earlier results indicated that, for an average household, self-sufficiency in water supply can be achieved by following the Urban harvest Approach (UHA), in a combination of demand minimization, cascading and multi-sourcing. To achieve these results, it was assumed that all available local resources can be harvested. In reality, however, temporal, spatial and location-bound factors pose limitations to this harvest and, thus, to self-sufficiency. This article investigates potential spatial and temporal limitations to harvest local water resources at building level for the Netherlands, with a focus on indoor demand. Two building types were studied, a free standing house (one four-people household) and a mid-rise apartment flat (28 two-person households). To be able to model yearly water balances, daily patterns considering household occupancy and presence of water using appliances were defined per building type. Three strategies were defined. The strategies include demand minimization, light grey water (LGW) recycling, and rainwater harvesting (multi-sourcing). Recycling and multi-sourcing cater for toilet flushing and laundry machine. Results showed that water saving devices may reduce 30% of the conventional demand. Recycling of LGW can supply 100% of second quality water (DQ2) which represents 36% of the conventional demand or up to 20% of the minimized demand. Rainwater harvesting may supply approximately 80% of the minimized demand in case of the apartment flat and 60% in case of the free standing house. To harvest these potentials, different system specifications, related to the household type, are required. Two constraints to recycle and multi-source were identified, namely i) limitations in the grey water production and available rainfall; and ii) the potential to harvest water as determined by the temporal pattern in water availability, water use, and storage and treatment capacities. Copyright © 2013 Elsevier Ltd. All rights reserved.
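The balance behind these percentages is straightforward arithmetic between the minimized demand, the second-quality (DQ2) fraction that recycled grey water can cover, and the roof runoff available for harvesting. The sketch below reproduces that arithmetic with hypothetical per-capita and roof figures, not the paper's Dutch appliance patterns.

```python
# Back-of-the-envelope daily water balance for one household (all figures
# hypothetical, chosen only to mirror the shares reported in the abstract).
CONVENTIONAL_LPD = 120               # conventional use, litres/person/day (assumed)
SAVING_FACTOR = 0.70                 # water-saving devices keep ~70% of conventional use
DQ2_SHARE_OF_MINIMIZED = 0.20        # toilet + laundry share of the minimized demand
OCCUPANTS = 4
ROOF_M2, RUNOFF_COEFF, RAIN_MM_PER_DAY = 100.0, 0.8, 2.2

minimized = OCCUPANTS * CONVENTIONAL_LPD * SAVING_FACTOR
dq2_need = DQ2_SHARE_OF_MINIMIZED * minimized            # coverable by recycled grey water
rain_harvest = ROOF_M2 * RUNOFF_COEFF * RAIN_MM_PER_DAY  # 1 mm on 1 m^2 = 1 litre
print(f"minimized demand {minimized:.0f} L/d, "
      f"grey-water-suppliable {dq2_need:.0f} L/d, rain harvest {rain_harvest:.0f} L/d")
```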
Manzo, C; Mei, A; Zampetti, E; Bassani, C; Paciucci, L; Manetti, P
2017-04-15
This paper describes a methodology to perform chemical analyses in landfill areas by integrating multisource geomatic data. We used a top-down approach to identify Environmental Point of Interest (EPI) based on very high-resolution satellite data (Pleiades and WorldView 2) and on in situ thermal and photogrammetric surveys. Change detection techniques and geostatistical analysis supported the chemical survey, undertaken using an accumulation chamber and an RIIA, an unmanned ground vehicle developed by CNR IIA, equipped with a multiparameter sensor platform for environmental monitoring. Such an approach improves site characterization, identifying the key environmental points of interest where it is necessary to perform detailed chemical analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Lynn; Rountree, Kelley; Mills, Karmann
This report discusses the use of accelerated stress testing (AST) to provide insights into the long-term behavior of commercial products utilizing different types of mid-power LEDs (MP-LEDs) integrated into the same LED module. Test results are presented from two commercial lamps intended for use in horticulture applications and one tunable-white LED module intended for use in educational and office lighting applications. Each of these products is designed to provide a custom spectrum for their targeted applications and each achieves this goal in different ways. Consequently, a comparison of the long-term stability of these devices will provide insights regarding approaches that could be used to possibly lengthen the lifetime of SSL products.
Teams as innovative systems: multilevel motivational antecedents of innovation in R&D teams.
Chen, Gilad; Farh, Jiing-Lih; Campbell-Bush, Elizabeth M; Wu, Zhiming; Wu, Xin
2013-11-01
Integrating theories of proactive motivation, team innovation climate, and motivation in teams, we developed and tested a multilevel model of motivators of innovative performance in teams. Analyses of multisource data from 428 members of 95 research and development (R&D) teams across 33 Chinese firms indicated that team-level support for innovation climate captured motivational mechanisms that mediated between transformational leadership and team innovative performance, whereas members' motivational states (role-breadth self-efficacy and intrinsic motivation) mediated between proactive personality and individual innovative performance. Furthermore, individual motivational states and team support for innovation climate uniquely promoted individual innovative performance, and, in turn, individual innovative performance linked team support for innovation climate to team innovative performance. (c) 2013 APA, all rights reserved.
Surveillance for work-related skull fractures in Michigan.
Kica, Joanna; Rosenman, Kenneth D
2014-12-01
The objective was to develop a multisource surveillance system for work-related skull fractures. Records on work-related skull fractures were obtained from Michigan's 134 hospitals, Michigan's Workers' Compensation Agency and death certificates. Cases from the three sources were matched to eliminate duplicates from more than one source. Workplaces where the most severe injuries occurred were referred to OSHA for an enforcement inspection. There were 318 work-related skull fractures, not including facial fractures, between 2010 and 2012. In 2012, after the inclusion of facial fractures, 316 fractures were identified, of which 218 (69%) were facial fractures. The Bureau of Labor Statistics' (BLS) 2012 estimate of skull fractures in Michigan, which includes facial fractures, was 170, which was 53.8% of those identified from our review of medical records. The inclusion of facial fractures in the surveillance system increased the percentage of women identified from 15.4% to 31.2%, decreased severity (hospitalization went from 48.7% to 10.6% and loss of consciousness went from 56.5% to 17.8%), decreased falls from 48.2% to 27.6%, increased assaults from 5.0% to 20.2%, shifted the most common industry from construction (13.3%) to health care and social assistance (15.0%), and shifted the highest incidence rate from males 65+ (6.8 per 100,000) to young men, 20-24 years (9.6 per 100,000). Workplace inspections resulted in 45 violations and $62,750 in penalties. The Michigan multisource surveillance system of workplace injuries had two major advantages over the existing national system: (a) workplace investigations were initiated, hazards were identified, and safety changes were implemented at the facilities where the injuries occurred; and (b) a more accurate count was derived, with 86% more work-related skull fractures identified than BLS's employer-based estimate. A more comprehensive system to identify and target interventions for workplace injuries was implemented using hospital and emergency department medical records. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Qiang; Xu, Qian; Zhang, Yijun; Yang, Yinghui; Yong, Qi; Liu, Guoxiang; Liu, Xianwen
2018-03-01
A single satellite geodetic technique has weaknesses in mapping the sequence of ground deformation associated with serial seismic events; for example, InSAR with a long revisiting period readily yields mixed, complex deformation signals from multiple events. This challenges the observation capability of a single satellite geodetic technique for accurate recognition of individual surface deformation and earthquake models. The rapidly increasing availability of various satellite observations provides a good solution for overcoming this issue. In this study, we explore a sequential combination of multiple overlapping datasets from ALOS/PALSAR, ENVISAT/ASAR and GPS observations to separate the surface deformation associated with the 2011 Mw 9.0 Tohoku-Oki major quake and two strong aftershocks, the Mw 6.6 Iwaki and Mw 5.8 Ibaraki events. We first estimate the fault slip model of the major shock with ASAR interferometry and GPS displacements as constraints. Because the PALSAR interferogram used spans the period of all the events, we then remove the surface deformation of the major shock through a forward-calculated prediction, thus obtaining the PALSAR InSAR deformation associated with the two strong aftershocks. The inversion for the source parameters of the Iwaki aftershock is conducted using the refined PALSAR deformation, considering that the higher-magnitude Iwaki quake has a dominant deformation contribution compared with the Ibaraki event. After removal of the deformation component of the Iwaki event, we determine the fault slip distribution of the Ibaraki shock using the remaining PALSAR InSAR deformation. Finally, the complete source models for the serial seismic events are clearly identified from the sequential combination of multi-source satellite observations, which suggests that the major quake is a predominant mega-thrust rupture, whereas the two aftershocks are normal faulting motions. The estimated seismic moment magnitudes for the Tohoku-Oki, Iwaki and Ibaraki events are Mw 9.0, Mw 6.85 and Mw 6.11, respectively.
Towards Device-Independent Information Processing on General Quantum Networks
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Hoban, Matty J.
2018-01-01
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
Towards Device-Independent Information Processing on General Quantum Networks.
Lee, Ciarán M; Hoban, Matty J
2018-01-12
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
Wu, Peng; Huang, Yiyin; Kang, Longtian; Wu, Maoxiang; Wang, Yaobing
2015-01-01
A series of palladium-based catalysts of metal alloying (Sn, Pb) and/or (N-doped) graphene support with regular enhanced electrocatalytic activity were investigated. The peak current density (118.05 mA cm−2) of PdSn/NG is higher than the sum current density (45.63 + 47.59 mA cm−2) of Pd/NG and PdSn/G. It reveals a synergistic electrocatalytic oxidation effect in PdSn/N-doped graphene Nanocomposite. Extend experiments show this multisource synergetic catalytic effect of metal alloying and N-doped graphene support in one catalyst on small organic molecule (methanol, ethanol and Ethylene glycol) oxidation is universal in PdM(M = Sn, Pb)/NG catalysts. Further, The high dispersion of small nanoparticles, the altered electron structure and Pd(0)/Pd(II) ratio of Pd in catalysts induced by strong coupled the metal alloying and N-doped graphene are responsible for the multisource synergistic catalytic effect in PdM(M = Sn, Pb) /NG catalysts. Finally, the catalytic durability and stability are also greatly improved. PMID:26434949
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics research problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme, whose drawback is either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
Perualila-Tan, Nolen Joy; Shkedy, Ziv; Talloen, Willem; Göhlmann, Hinrich W H; Moerbeke, Marijke Van; Kasim, Adetayo
2016-08-01
The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative, chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, which serves as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method since, after the determination of clusters, a secondary analysis is performed that identifies differentially expressed genes associated with the derived integrated cluster(s) to further explain the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
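As a rough illustration of the similarity-level integration described above, the sketch below combines two hypothetical similarity matrices with complementary weights and hands the result to an off-the-shelf hierarchical clustering routine. The weight, the matrices and the cluster count are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def weighted_similarity(sim_bio, sim_chem, w=0.5):
    # w weights the biological similarity; (1 - w) weights the chemical one
    return w * sim_bio + (1.0 - w) * sim_chem

# hypothetical similarity matrices for four compounds (values are made up)
sim_bio = np.array([[1.0, 0.8, 0.2, 0.1],
                    [0.8, 1.0, 0.3, 0.2],
                    [0.2, 0.3, 1.0, 0.7],
                    [0.1, 0.2, 0.7, 1.0]])
sim_chem = np.array([[1.0, 0.6, 0.4, 0.2],
                     [0.6, 1.0, 0.3, 0.3],
                     [0.4, 0.3, 1.0, 0.8],
                     [0.2, 0.3, 0.8, 1.0]])

S = weighted_similarity(sim_bio, sim_chem, w=0.4)
D = 1.0 - S                                   # similarity -> dissimilarity
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D), method="average")  # any clustering algorithm works here
print(fcluster(Z, t=2, criterion="maxclust")) # two integrated clusters
```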
Progressive simplification and transmission of building polygons based on triangle meshes
NASA Astrophysics Data System (ADS)
Li, Hongsheng; Wang, Yingjie; Guo, Qingsheng; Han, Jiafu
2010-11-01
Digital earth is a virtual representation of our planet and a data integration platform which aims at harnessing multi-source, multi-resolution, multi-format spatial data. This paper introduces a research framework integrating progressive cartographic generalization and transmission of vector data. Progressive cartographic generalization provides multi-resolution data, from coarse to fine, as key scales and the increments between them, which is not available in the traditional generalization framework. Based on the progressive simplification algorithm, the building polygons are triangulated into meshes and encoded according to the simplification sequence of two basic operations, edge collapse and vertex split. The map data at key scales and the encoded increments between them are stored in a multi-resolution file. As the client submits requests to the server, the coarsest map is transmitted first and then the increments. After data decoding and mesh refinement, the building polygons are visualized with more detail. Progressive generalization and transmission of building polygons are demonstrated in the paper.
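The sketch below conveys the coarse-first, increments-later idea in a deliberately simplified form: it works on a polygon ring rather than a triangle mesh, and the record layout, names and coordinates are assumptions for illustration only, not the encoding used in the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class VertexSplit:
    """One refinement increment: re-insert a vertex removed earlier by an
    edge collapse, restoring detail to the building outline."""
    position: int   # insertion index in the current ring
    vertex: Point   # coordinates of the restored vertex

def refine(coarse_ring: List[Point], increments: List[VertexSplit]) -> List[Point]:
    """Apply vertex-split increments in coarse-to-fine order."""
    ring = list(coarse_ring)
    for inc in increments:
        ring.insert(inc.position, inc.vertex)
    return ring

# the server would transmit the coarsest ring first, then the increments
coarse = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
updates = [VertexSplit(2, (10.0, 4.0)), VertexSplit(3, (8.0, 4.0))]
print(refine(coarse, updates))
```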
NASA Astrophysics Data System (ADS)
Gao, Tian; Zhu, Jiaojun; Deng, Songqiu; Zheng, Xiao; Zhang, Jinxin; Shang, Guiduo; Huang, Liyan
2016-10-01
Timber production is the purpose of managing plantation forests, and its spatial and quantitative information is critical for advising management strategies. Previous studies have focused on growing stock volume (GSV), which represents the current potential of timber production, yet few studies have investigated the historical process-harvested timber. This has left a gap in the synthetical ecosystem service assessment of timber production. In this paper, we established a Management Process-based Timber production (MPT) framework to integrate the current GSV and the harvested timber derived from historical logging regimes, in order to synthetically assess timber production over a historical period. In the MPT framework, age-class and current GSV determine the timing of historical thinnings and the corresponding harvested timber, using a "space-for-time" substitution. The total timber production can then be estimated from the timber harvested in each historical thinning and the current GSV. To test this MPT framework, an empirical study on a larch plantation (LP) with an area of 43,946 ha was conducted in North China for the period 1962-2010. Field-based inventory data were integrated with ALOS PALSAR (Advanced Land-Observing Satellite Phased Array L-band Synthetic Aperture Radar) and Landsat-8 OLI (Operational Land Imager) data for estimating the age-class and current GSV of the LP. The random forest model with PALSAR backscatter intensity channels and OLI bands as input predictive variables yielded an accuracy of 67.9% with a Kappa coefficient of 0.59 for age-class classification. The regression model using PALSAR data produced a root mean square error (RMSE) of 36.5 m³ ha⁻¹. The total timber production of the LP was estimated to be 7.27 × 10⁶ m³, with 4.87 × 10⁶ m³ in current GSV and 2.40 × 10⁶ m³ in harvested timber through historical thinning. The historical process-harvested timber accounts for 33.0% of the total timber production, a component that has been neglected in assessments of the current status of plantation forests. Considering both the RMSE of the predicted GSV and the misclassification of age-class, the error in timber production was estimated to range from -55.2 to 56.3 m³ ha⁻¹. The MPT framework can be used to assess timber production of other tree species at a larger spatial scale, providing crucial information for a better understanding of forest ecosystem services.
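A minimal sketch of the kind of age-class classification step used above might look like the following; the features are synthetic stand-ins for the PALSAR backscatter channels and OLI band reflectances rather than real imagery, and all numbers are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# stand-in feature matrix: columns play the role of PALSAR backscatter
# channels and OLI band reflectances; y holds four hypothetical age classes
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa coefficient:", cohen_kappa_score(y_te, pred))
```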
Jouhet, V; Defossez, G; Ingrand, P
2013-01-01
The aim of this study was to develop and evaluate a selection algorithm of relevant records for the notification of incident cases of cancer on the basis of the individual data available in a multi-source information system. This work was conducted on data for the year 2008 in the general cancer registry of the Poitou-Charentes region (France). The selection algorithm hierarchizes information according to its level of relevance for tumoral topography and tumoral morphology independently. The selected data are combined to form composite records. These records are then grouped in accordance with the notification rules of the International Agency for Research on Cancer for multiple primary cancers. The evaluation, based on recall, precision and F-measure, compared cases validated manually by the registry's physicians with tumours notified with and without record selection. The analysis involved 12,346 tumours validated among 11,971 individuals. The data used were hospital discharge data (104,474 records), pathology data (21,851 records), healthcare insurance data (7508 records) and cancer care centre data (686 records). The selection algorithm improved notification performance for tumour topography (F-measure 0.926 with vs. 0.857 without selection) and tumour morphology (F-measure 0.805 with vs. 0.750 without selection). These results show that selecting information according to its origin is efficient in reducing the noise generated by imprecise coding. Further research is needed to solve the semantic problems relating to the integration of heterogeneous data and the use of non-structured information.
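For reference, the recall, precision and F-measure figures quoted above can be computed as in the short sketch below; the tumour identifiers are invented purely for illustration.

```python
def score(validated, notified):
    """Recall, precision and F-measure of the notified tumour set against
    the manually validated reference set (both given as sets of record IDs)."""
    tp = len(validated & notified)
    recall = tp / len(validated) if validated else 0.0
    precision = tp / len(notified) if notified else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return recall, precision, f

# made-up identifiers purely for illustration
validated = {"T1", "T2", "T3", "T4"}
notified = {"T1", "T2", "T5"}
print(score(validated, notified))  # (0.5, 0.667, 0.571)
```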
Satellite radiothermovision of atmospheric mesoscale processes: case study of tropical cyclones
NASA Astrophysics Data System (ADS)
Ermakov, D. M.; Sharkov, E. A.; Chernushich, A. P.
2015-04-01
Satellite radiothermovision is a set of processing techniques applicable to multisource data from radiothermal monitoring of the ocean-atmosphere system, which allows creating a dynamic description of mesoscale and synoptic atmospheric processes and estimating physically meaningful integral characteristics of the observed processes (such as the advective flux of latent heat through a given boundary). The approach is based on spatiotemporal interpolation of the satellite measurements, which allows reconstructing the radiothermal fields (as well as the fields of geophysical parameters) of the ocean-atmosphere system at global scale with a spatial resolution of about 0.125° and a temporal resolution of 1.5 hours. The accuracy of the spatiotemporal interpolation was estimated by direct comparison of interpolated data with data from independent asynchronous measurements and was shown to correspond to the best achievable accuracy reported in the literature (for total precipitable water fields the accuracy is about 0.8 mm). The advantages of the implemented interpolation scheme are: closure under input radiothermal data, homogeneity in time scale (all data are interpolated through the same time intervals), and automatic estimation of both the intermediate states of the scalar field of the studied geophysical parameter and the vector field of the effective advection velocity (horizontal motion). Using this pair of fields one can calculate the flux of a given geophysical quantity through any given boundary. For example, in the case of the total precipitable water field, this flux (under proper calibration) has the meaning of an advective latent heat flux. This opportunity was used to evaluate the latent heat flux through a set of circular contours enclosing a tropical cyclone and drifting with it during its evolution. A remarkable interrelation was observed between the calculated magnitude and sign of the advective latent heat flux and the intensity of the tropical cyclone. This interrelation is demonstrated in several examples of hurricanes and tropical cyclones of August 2000, and typhoons of November 2013, including super typhoon Haiyan.
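A schematic of how a scalar flux through a circular contour could be evaluated from the paired scalar and advection fields is sketched below, under the simplifying assumptions that everything is expressed in grid units and that projection effects, unit conversion and calibration are ignored; it is not the authors' implementation.

```python
import numpy as np

def advective_flux(q, u, v, cx, cy, radius, n_theta=360):
    """Outward advective flux of a scalar field q (e.g. total precipitable
    water) across a circle, given the effective advection field (u, v).
    All fields are 2-D arrays on a regular grid; cx, cy and radius are in
    grid units, so map projection and unit conversion are deliberately ignored."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xs = cx + radius * np.cos(theta)
    ys = cy + radius * np.sin(theta)
    # nearest-neighbour sampling of the fields along the contour
    i = np.clip(np.round(ys).astype(int), 0, q.shape[0] - 1)
    j = np.clip(np.round(xs).astype(int), 0, q.shape[1] - 1)
    nx, ny = np.cos(theta), np.sin(theta)     # outward unit normal
    vn = u[i, j] * nx + v[i, j] * ny          # normal velocity component
    ds = radius * (2.0 * np.pi / n_theta)     # arc-length element
    return float(np.sum(q[i, j] * vn * ds))

# tiny synthetic example: a uniform field advected uniformly eastward gives
# a near-zero net flux (inflow across one side balances outflow on the other)
q = np.full((50, 50), 30.0)
u = np.full((50, 50), 1.0)
v = np.zeros((50, 50))
print(advective_flux(q, u, v, cx=25, cy=25, radius=10))
```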
Al Ansari, Ahmed; Al Khalifa, Khalid; Al Azzawi, Mohamed; Al Amer, Rashed; Al Sharqi, Dana; Al-Mansoor, Anwar; Munshi, Fadi M
2015-01-01
We aimed to design, implement, and evaluate the feasibility and reliability of a multisource feedback (MSF) system to assess interns in their clerkship year in the Middle Eastern culture, the Kingdom of Bahrain. The study was undertaken in the Bahrain Defense Force Hospital, a military teaching hospital in the Kingdom of Bahrain. A total of 21 interns (who represent the total population of interns for the given year) were assessed in this study. All of the interns were rotating through our hospital during their year-long clerkship rotation. The study sample consisted of nine males and 12 females. Each participating intern was evaluated by three groups of raters: eight medical intern colleagues, eight senior medical colleagues, and eight coworkers from different departments. The total mean response rate was 62.3%. A factor analysis found that the questionnaire data grouped into three factors that accounted for 76.4% of the total variance. These three factors were labeled professionalism, collaboration, and communication. Reliability analysis indicated that the full instrument scale had high internal consistency (Cronbach's α 0.98). The generalizability coefficients for the surveys were estimated to be 0.78. Based on our results and analysis, we conclude that the MSF tool we used on the interns rotating in their clerkship year within our Middle Eastern culture provides an effective method of evaluation because it offers a reliable, valid, and feasible process.
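The internal-consistency statistic reported above (Cronbach's α) can be computed directly from a respondents-by-items score matrix, as in the short sketch below; the ratings shown are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# made-up ratings: 5 raters scoring 4 questionnaire items on a 1-5 scale
ratings = [[4, 5, 4, 5],
           [3, 4, 3, 4],
           [5, 5, 4, 5],
           [2, 3, 2, 3],
           [4, 4, 4, 4]]
print(round(cronbach_alpha(ratings), 3))
```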
Energy Harvesting Research: The Road from Single Source to Multisource.
Bai, Yang; Jantunen, Heli; Juuti, Jari
2018-06-07
Energy harvesting technology may be considered an ultimate solution to replace batteries and provide a long-term power supply for wireless sensor networks. Looking back at its research history, individual energy harvesters for the conversion of single energy sources into electricity were developed first, followed by hybrid counterparts designed for use with multiple energy sources. Very recently, the concept of a truly multisource energy harvester built from only a single piece of material as the energy conversion component has been proposed. This review, from the perspective of materials and device configurations, gives a detailed and wide-ranging overview of energy harvesting research. It covers single-source devices including solar, thermal, kinetic and other types of energy harvesters; hybrid energy harvesting configurations for both single and multiple energy sources; and single-material, multisource energy harvesters. It also includes the energy conversion principles of photovoltaic, electromagnetic, piezoelectric, triboelectric, electrostatic, electrostrictive, thermoelectric, pyroelectric, magnetostrictive, and dielectric devices. This is one of the most comprehensive reviews conducted to date, focusing on the entire energy harvesting research scene and providing a guide to seeking deeper and more specific research references and resources from every corner of the scientific community. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface defect inspection method and make automated, robust image region-of-interest (ROI) delineation a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The presented method and the devised system are mainly applicable to surface quality inspection of strips, billets, slabs and similar products. In this work we exploit the complementary advantages of two common machine vision (MV) systems: traditional line-array CCD scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of ROIs can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through an RFC competitive region classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland and forests. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions are widely applied to environmental disaster monitoring. With multi-source and multi-temporal high resolution images, we conduct rapid, systematic and serial observations of economic damage and environmental disasters on Earth. The approach is based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of UAS technology include great mobility, real-time availability and more flexible operation under varied weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain information is simulated using the DSM data and can serve as a reference for future disaster recovery.
Lyness, Karen S; Judiesch, Michael K
2008-07-01
The present study was the first cross-national examination of whether managers who were perceived to be high in work-life balance were expected to be more or less likely to advance in their careers than were less balanced, more work-focused managers. Using self ratings, peer ratings, and supervisor ratings of 9,627 managers in 33 countries, the authors examined within-source and multisource relationships with multilevel analyses. The authors generally found that managers who were rated higher in work-life balance were rated higher in career advancement potential than were managers who were rated lower in work-life balance. However, national gender egalitarianism, measured with Project GLOBE scores, moderated relationships based on supervisor and self ratings, with stronger positive relationships in low egalitarian cultures. The authors also found 3-way interactions of work-life balance ratings, ratee gender, and gender egalitarianism in multisource analyses in which self balance ratings predicted supervisor and peer ratings of advancement potential. Work-life balance ratings were positively related to advancement potential ratings for women in high egalitarian cultures and men in low gender egalitarian cultures, but relationships were nonsignificant for men in high egalitarian cultures and women in low egalitarian cultures.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
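The core of the Dempster-Shafer evidence combination that such a classifier relies on can be sketched in a few lines, as below; the hypothesis sets and mass values are made up, and this is a generic illustration of Dempster's rule rather than the MERCURY⊕ implementation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to mass)
    with Dempster's rule; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}

# two hypothetical evidence sources supporting land-cover hypotheses
m_spectral = {frozenset({"tundra"}): 0.6, frozenset({"tundra", "rock"}): 0.4}
m_terrain = {frozenset({"tundra"}): 0.5, frozenset({"rock"}): 0.3,
             frozenset({"tundra", "rock"}): 0.2}
print(dempster_combine(m_spectral, m_terrain))
```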
Dong, Yuntao; Liao, Hui; Chuang, Aichia; Zhou, Jing; Campbell, Elizabeth M
2015-09-01
Integrating insights from the literature on customers' central role in service and the literature on employee creativity, we offer a theoretical and empirical account of how and when customer empowering behaviors can motivate employee creativity during service encounters and, subsequently, influence customer satisfaction with the service experience. Using multilevel, multisource, experience sampling data from 380 hairstylists matched with 3550 customers in 118 hair salons, we found that customer empowering behaviors were positively related to employee creativity and subsequent customer satisfaction via employee state promotion focus. Results also showed that empowering behaviors from different agents function synergistically in shaping employee creativity: supervisory empowering leadership strengthened the indirect effect of customer empowering behaviors on employee creativity via state promotion focus. (c) 2015 APA, all rights reserved.
Parikesit; Salim, H; Triharyanto, E; Gunawan, B; Sunardi; Abdoellah, O S; Ohtsuka, R
2005-01-01
The Citarum River in West Java is the largest water supplier to the Saguling Dam, which plays a major role in electric power generation for the entire Java Island and is used for the aquaculture of marketed fish. To elucidate the extent of degradation in water quality and its causes in the Upper Citarum watershed, physical, chemical and biological parameters for water samples collected from various sites were analyzed. The results demonstrate large site-to-site variations in water qualities and pollutant loads derived from various human activities such as agriculture, cattle raising and the textile industry. To halt worsening conditions of the Citarum watershed, integrated mitigation efforts should be made, taking biophysical pollution mechanisms and local socioeconomic conditions into account.
WHO Expert Committee on Specifications for Pharmaceutical Preparations.
2012-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use: Development of monographs for The International Pharmacopoeia; WHO good manufacturing practices: water for pharmaceutical use; Pharmaceutical development of multisource (generic) pharmaceutical products--points to consider; Guidelines on submission of documentation for a multisource (generic) finished pharmaceutical product for the WHO Prequalification of Medicines Programme: quality part; Development of paediatric medicines: points to consider in formulation; Recommendations for quality requirements for artemisinin as a starting material in the production of antimalarial active pharmaceutical ingredients.
NASA Astrophysics Data System (ADS)
Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak
Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Inspired by the logic of commodity, core competencies and economies of scale, assets, existing departments and IT functions are transferred to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission-critical applications. Managing mission-critical applications requires a different approach in which building relationships is critical. The relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high-quality services.
Gao, Lin; Li, Chang-chun; Wang, Bao-shan; Yang Gui-jun; Wang, Lei; Fu, Kui
2016-01-01
With the innovation of remote sensing technology, remote sensing data sources are becoming more and more abundant. The main aim of this study was to analyze the retrieval accuracy of soybean leaf area index (LAI) based on multi-source remote sensing data, including ground hyperspectral, unmanned aerial vehicle (UAV) multispectral and Gaofen-1 (GF-1) WFV data. The ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI), and triangle vegetation index (TVI) were used to establish LAI retrieval models. The models with the highest calibration accuracy were used in the validation. The capability of these three kinds of remote sensing data for LAI retrieval was assessed according to the estimation accuracy of the models. The experimental results showed that the models based on the ground hyperspectral and UAV multispectral data achieved better estimation accuracy (R² greater than 0.69 and RMSE less than 0.4 at the 0.01 significance level) than the model based on WFV data. The RVI logarithmic model based on ground hyperspectral data was slightly superior to the NDVI linear model based on UAV multispectral data (the differences in E(A), R² and RMSE were 0.3%, 0.04 and 0.006, respectively). The models based on WFV data had the lowest estimation accuracy, with R² less than 0.30 and RMSE greater than 0.70. The effects of sensor spectral response characteristics, sensor geometric location and spatial resolution on soybean LAI retrieval are discussed. The results demonstrated that ground hyperspectral data were advantageous over, but not markedly superior to, traditional multispectral data in soybean LAI retrieval. WFV imagery with 16 m spatial resolution could not meet the requirements of crop growth monitoring at the field scale. Given its high precision in retrieving soybean LAI and its working efficiency, acquiring agricultural information by UAV remote sensing can be regarded as an optimal approach. Therefore, with more and more remote sensing information sources becoming available, agricultural UAV remote sensing could become an important information resource for guiding field-scale crop management and provide more scientific and accurate information for precision agriculture research.
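A minimal sketch of the index-based retrieval step discussed above is given below: a logarithmic RVI model is fitted to plot-level data by least squares. The reflectances and LAI values are invented for illustration and are not the study's measurements.

```python
import numpy as np

def rvi(nir, red):
    """Ratio vegetation index."""
    return nir / red

# made-up plot-level reflectances and field-measured LAI values
red = np.array([0.06, 0.05, 0.04, 0.07, 0.05])
nir = np.array([0.42, 0.47, 0.52, 0.38, 0.45])
lai = np.array([2.1, 2.9, 3.8, 1.7, 2.6])

# logarithmic RVI model, LAI = a * ln(RVI) + b, fitted by least squares
x = np.log(rvi(nir, red))
a, b = np.polyfit(x, lai, 1)
pred = a * x + b
rmse = float(np.sqrt(np.mean((pred - lai) ** 2)))
print(f"LAI = {a:.2f} * ln(RVI) + {b:.2f}   (calibration RMSE = {rmse:.3f})")
```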
NASA Astrophysics Data System (ADS)
Zhong, L.; Ma, Y.; Ma, W.; Zou, M.; Hu, Y.
2016-12-01
Actual evapotranspiration (ETa) is an important component of the water cycle in the Tibetan Plateau. It is controlled by many hydrological and meteorological factors. Therefore, it is of great significance to estimate ETa accurately and continuously. Understanding land surface parameters and land-atmosphere water exchange processes in small, watershed-scale areas is also drawing much attention from the scientific community. Based on in-situ meteorological data in the Nagqu river basin and surrounding regions, the main meteorological factors affecting the evaporation process were quantitatively analyzed and point-scale ETa estimation models for the study area were successfully built. On the other hand, multi-source satellite data (such as SPOT, MODIS and FY-2C) were used to derive the surface characteristics in the river basin. A time series processing technique was applied to remove cloud cover and reconstruct the data series. Then the improved land surface albedo, improved downward shortwave radiation flux and reconstructed normalized difference vegetation index (NDVI) were coupled into the topographically enhanced surface energy balance system to estimate ETa. The model-estimated results were compared with ETa values determined by the combinatory method. The results indicated that the model-estimated ETa agreed well with in-situ measurements, with a correlation coefficient, mean bias error and root mean square error of 0.836, 0.087 and 0.140 mm/h, respectively.
Raciti, Steve M; Hutyra, Lucy R; Newell, Jared D
2014-12-01
High resolution maps of urban vegetation and biomass are powerful tools for policy-makers and community groups seeking to reduce rates of urban runoff, moderate urban heat island effects, and mitigate the effects of greenhouse gas emissions. We developed a very high resolution map of urban tree biomass, assessed the scale sensitivities in biomass estimation, compared our results with lower resolution estimates, and explored the demographic relationships in biomass distribution across the City of Boston. We integrated remote sensing data (including LiDAR-based tree height estimates) and field-based observations to map canopy cover and aboveground tree carbon storage at ~1 m spatial scale. Mean tree canopy cover was estimated to be 25.5 ± 1.5% and carbon storage was 355 Gg (28.8 Mg C ha−1) for the City of Boston. Tree biomass was highest in forest patches (110.7 Mg C ha−1), but residential (32.8 Mg C ha−1) and developed open (23.5 Mg C ha−1) land uses also contained relatively high carbon stocks. In contrast with previous studies, we did not find significant correlations between tree biomass and the demographic characteristics of Boston neighborhoods, including income, education, race, or population density. The proportion of households that rent was negatively correlated with urban tree biomass (R² = 0.26, p = 0.04) and correlated with Priority Planting Index values (R² = 0.55, p = 0.001), potentially reflecting differences in land management between rented and owner-occupied residential properties. We compared our very high resolution biomass map to lower resolution biomass products from other sources and found that those products consistently underestimated biomass within urban areas. This underestimation became more severe as spatial resolution decreased. This research demonstrates that 1) urban areas contain considerable tree carbon stocks; 2) canopy cover and biomass may not be related to the demographic characteristics of Boston neighborhoods; and 3) recent advances in high resolution remote sensing have the potential to improve the characterization and management of urban vegetation. Copyright © 2014 Elsevier B.V. All rights reserved.
Yu, Miaoyu; Law, Samuel; Dang, Kien; Byrne, Niall
2016-04-01
Psychiatry as a field and undergraduate psychiatry education (UPE) specifically have historically been in the periphery of medicine in China, unlike the relatively central role they occupy in the West. During the current economic reform, Chinese undergraduate medical education (UME) is undergoing significant changes and standardization under the auspices of the national accreditation body. A comparative study, using Bereday's comparative education methodology and Feldmann's evaluative criteria as theoretical frameworks, to gain understanding of the differences and similarities between China and the West in terms of UPE can contribute to the UME reform, and specifically UPE development in China, and promote cross-cultural understanding. The authors employed multi-sourced information to perform a comparative study of UPE, using the University of Toronto as a representative of the western model and Guangxi Medical University, a typical program in China, as the Chinese counterpart. Key contrasts are numerous; highlights include the difference in age and level of education of the entrants to medical school, centrally vs. locally developed UPE curriculum, level of integration with the rest of medical education, visibility within the medical school, adequacy of teaching resources, amount of clinical learning experience, opportunity for supervision and mentoring, and methods of student assessment. Examination of the existing, multi-sourced information reveals some fundamental differences in the current UPE between the representative Chinese and western programs, reflecting historical, political, cultural, and socioeconomic circumstances of the respective settings. The current analyses show some areas worthy of further exploration to inform Chinese UPE reform. The current research is a practical beginning to the development of a deeper collaborative dialogue about psychiatry and its educational underpinnings between China and the West.
NASA Astrophysics Data System (ADS)
Liu, Wenbin; Sun, Fubao; Li, Yanzhong; Zhang, Guoqing; Sang, Yan-Fang; Lim, Wee Ho; Liu, Jiahong; Wang, Hong; Bai, Peng
2018-01-01
The dynamics of basin-scale water budgets over the Tibetan Plateau (TP) are still not well understood due to the lack of in situ hydro-climatic observations. In this study, we investigate the seasonal cycles and trends of water budget components (e.g. precipitation P, evapotranspiration ET and runoff Q) in 18 TP river basins during the period 1982-2011 through the use of multi-source datasets (e.g. in situ observations, satellite retrievals, reanalysis outputs and land surface model simulations). A water balance-based two-step procedure, which considers the changes in basin-scale water storage on the annual scale, is adopted to calculate actual ET. The results indicated that precipitation (mainly snowfall from mid-autumn to the following spring), which is mainly concentrated during June-October (varying among basins affected by different monsoons), was the major contributor to runoff in the TP basins. P, ET and Q were found to increase marginally in most TP basins during the past 30 years, except in the upper Yellow River basin and some sub-basins of the Yalong River, which were mainly affected by the weakening East Asian monsoon. Moreover, the aridity index (PET/P) and runoff coefficient (Q/P) decreased slightly in most basins, in agreement with the warming and moistening climate of the Tibetan Plateau. The results demonstrate the usefulness of integrating multi-source datasets for hydrological applications in data-sparse regions. More generally, such an approach might offer helpful insights into understanding the water and energy budgets and the sustainability of water resource management practices in data-sparse regions under a changing environment.
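The annual water-balance relation underlying the two-step ET calculation can be written as ET = P - Q - ΔS; the sketch below states it as a one-line function with made-up values, not the study's numbers.

```python
def water_balance_et(p_mm, q_mm, delta_s_mm=0.0):
    """Annual basin-scale actual ET from the water balance ET = P - Q - dS,
    with every term expressed in mm of water over the basin area."""
    return p_mm - q_mm - delta_s_mm

# made-up annual values for one basin
print(water_balance_et(p_mm=520.0, q_mm=210.0, delta_s_mm=12.0))  # 298.0 mm
```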
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge to learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information for the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among the sources. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose incomplete multisource transfer learning through two directions of knowledge transfer, i.e., cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This practice helps compensate for missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for data missing from one portion of a source relative to another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
NASA Astrophysics Data System (ADS)
Li, J.; Wen, G.; Li, D.
2018-04-01
To master background information on grassland resource utilization and ecological conditions in Yunnan province and to improve fine-scale grassland management capacity, the Yunnan province agriculture department carried out grassland resource investigation work in 2017. The traditional grassland resource investigation method is ground-based survey, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, in contrast, is low-cost, wide-coverage and efficient, and can reflect the present situation of grassland resources objectively. It has become an indispensable grassland monitoring technology and data source and is gaining increasing recognition and application in grassland resources monitoring research. This paper studies the application of multi-source remote sensing imagery in the Yunnan province grassland resources investigation. First, grassland thematic information is extracted and field investigation conducted through segmentation of BJ-2 high-spatial-resolution imagery. Second, grassland types are classified and the degree of grassland degradation is evaluated using the high-resolution characteristics of Landsat 8 imagery. Third, a grass yield model and quality classification are obtained from the high-resolution and wide-swath characteristics of MODIS images together with sample survey data. Finally, qualitative field analysis of grassland is performed using UAV remote sensing imagery. Implementation over the project area shows that multi-source remote sensing data can be applied to the grassland resources investigation in Yunnan province and constitute an indispensable method.
Modeling multi-source flooding disaster and developing simulation framework in Delta
NASA Astrophysics Data System (ADS)
Liu, Y.; Cui, X.; Zhang, W.
2016-12-01
Most Delta regions of the world are densely populated and have advanced economies. However, due to the impact of multi-source flooding (upstream floods, rainstorm waterlogging and storm surges), Delta regions are very vulnerable, and academic circles attach great importance to multi-source flooding disasters in these areas. The Pearl River Delta urban agglomeration in south China is selected as the research area. Based on analysis of natural and environmental data for the Delta urban agglomeration (remote sensing data, land use data, topographic maps, etc.) and hydrological monitoring data, together with research on the uneven distribution and process of regional rainfall, the relationship between the underlying surface and runoff parameters, and the effect of flood storage patterns, we use an automatic or semi-automatic method for dividing spatial units that reflect the runoff characteristics of the urban agglomeration. We then develop a Multi-model Ensemble System for the changing environment, including an urban hydrologic model, a parallel 1D and 2D hydrodynamic model, a storm surge forecast model and other professional models. The system provides capabilities such as real-time setting of a variety of boundary conditions, fast real-time calculation, dynamic presentation of results and powerful statistical analysis. The model can be optimized and improved by a variety of verification methods. This work was supported by the National Natural Science Foundation of China (41471427); Special Basic Research Key Fund for Central Public Scientific Research Institutes.
Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.
Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin
2018-02-09
Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy loss of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Through spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be characterized. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, helping to reconstruct the hydrological sequence; the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, and the differencing of high-resolution optical and microwave images acquired before and after floods can then be used to calculate flood extent and reflect spatial changes in inundation; the flood-depth monitoring algorithm combining RS and GIS is simple and can quickly calculate depth from a known flood extent obtained from remote sensing images in ungauged basins. The results can provide effective help for the disaster relief work performed by government departments.
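The two steps sketched below, a change-detection flood mask from pre- and post-event imagery and a depth map from a DEM and a water level, illustrate the general idea in simplified form; the threshold, index values and elevations are assumptions, not the study's data or algorithm.

```python
import numpy as np

def flood_extent(pre, post, threshold=0.15):
    """Change-detection flood mask: flag pixels whose water-sensitive index
    (e.g. an NDWI-like value) rose by more than `threshold` after the event."""
    return (post - pre) > threshold

def flood_depth(dem, mask, water_level):
    """Depth = water surface elevation minus ground elevation, inside the
    detected extent only; negative depths are clipped to zero."""
    depth = np.where(mask, water_level - dem, 0.0)
    return np.clip(depth, 0.0, None)

# made-up 2 x 2 pre/post index images, DEM (m) and a water level of 4.5 m
pre = np.array([[0.10, 0.12], [0.20, 0.11]])
post = np.array([[0.55, 0.13], [0.60, 0.50]])
dem = np.array([[3.0, 5.0], [2.0, 4.0]])
mask = flood_extent(pre, post)
print(flood_depth(dem, mask, water_level=4.5))
```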
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Addressing the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers and data exchange. These involve the technologies of BIM 3D visualization and 4D virtual simulation, breakdown structure of the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is an integrated visual and virtual engineering information terminal under a unified data standard system. Applications include visual construction schemes, virtual construction schedules and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
NASA Astrophysics Data System (ADS)
Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George
2017-10-01
In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data at different processing levels (digital numbers through geocorrected reflectance) were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
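A relational layout of the kind described, linking instruments, flights and image products at different processing levels, could look roughly like the sketch below. It uses an in-memory SQLite database as a stand-in for the SQL server, and the table and column names are illustrative assumptions rather than the schema used in the study.

```python
import sqlite3

# in-memory stand-in for the SQL-server geodatabase
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE instrument (
    instrument_id INTEGER PRIMARY KEY,
    name          TEXT NOT NULL
);
CREATE TABLE flight (
    flight_id     INTEGER PRIMARY KEY,
    instrument_id INTEGER NOT NULL REFERENCES instrument(instrument_id),
    flight_date   TEXT NOT NULL,
    site          TEXT NOT NULL
);
CREATE TABLE image_product (
    product_id       INTEGER PRIMARY KEY,
    flight_id        INTEGER NOT NULL REFERENCES flight(flight_id),
    processing_level TEXT NOT NULL,  -- e.g. 'DN', 'radiance', 'reflectance'
    file_path        TEXT NOT NULL
);
""")
con.execute("INSERT INTO instrument VALUES (1, 'hyperspectral imager')")
con.execute("INSERT INTO flight VALUES (1, 1, '2016-07-15', 'pipeline corridor')")
con.execute("INSERT INTO image_product VALUES (1, 1, 'reflectance', '/data/f1_refl.img')")

# products joined back to their acquisition date and instrument
for row in con.execute("""SELECT f.flight_date, i.name, p.processing_level
                          FROM image_product p
                          JOIN flight f ON f.flight_id = p.flight_id
                          JOIN instrument i ON i.instrument_id = f.instrument_id"""):
    print(row)
```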
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth surface have been proven to be of help in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have been recently proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox, conceived to produce flood maps from remotely sensed and other ancillary information, through a data fusion approach. DAFNE is based on Bayesian Networks, and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
Wang, Qi; Xie, Zhiyi; Li, Fangbai
2015-11-01
This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF) in agricultural soils on the local scale. The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for the assessment of multi-source and multi-phase heavy metal pollution in agricultural soils on the local scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region and that SGB performed better than RF. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gomez, Rapson; Burns, G Leonard; Walsh, James A; Hafetz, Nina
2005-04-01
Confirmatory factor analysis (CFA) was used to model a multitrait by multisource matrix to determine the convergent and discriminant validity of measures of attention-deficit hyperactivity disorder (ADHD)-inattention (IN), ADHD-hyperactivity/impulsivity (HI), and oppositional defiant disorder (ODD) in 917 Malaysian elementary school children. The three trait factors were ADHD-IN, ADHD-HI, and ODD. The two source factors were parents and teachers. Similar to earlier studies with Australian and Brazilian children, the parent and teacher measures failed to show convergent and discriminant validity with Malaysian children. The study outlines the implications of such strong source effects in ADHD-IN, ADHD-HI, and ODD measures for the use of such parent and teacher scales to study the symptom dimensions.
Multisource least-squares reverse-time migration with structure-oriented filtering
NASA Astrophysics Data System (ADS)
Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong
2016-09-01
The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as a preconditioner into multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
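The conjugate-gradient least-squares iteration at the heart of such a scheme can be sketched generically as below, with a small dense matrix standing in for the (de)migration operator; the toy data are made up, and the comment only indicates where a structure-oriented filter could act as the preconditioner, without reproducing the paper's operator.

```python
import numpy as np

def cgls(L, d, n_iter=50):
    """Conjugate-gradient least squares for min ||L m - d||^2. Here L is a
    small dense matrix standing in for the de-migration/migration operator;
    a structure-oriented filter applied to the gradient s at each iteration
    would play the role of the preconditioner described in the abstract."""
    m = np.zeros(L.shape[1])
    r = d - L @ m
    s = L.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = L @ p
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = L.T @ r          # <- filtering s here would precondition the update
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# toy inversion: recover a known model from noisy data
rng = np.random.default_rng(0)
L = rng.normal(size=(80, 40))
m_true = rng.normal(size=40)
d = L @ m_true + 0.01 * rng.normal(size=80)
m_est = cgls(L, d, n_iter=40)
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```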
Crossley, James G M
2015-01-01
Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals, but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. The KSF framework provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability; response rates and unstructured interviews to evaluate feasibility; and factor structure and correlation studies to evaluate validity. On a voluntary basis the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable an assessment method to inform appraisal in nurses as it is in other health professional groups. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
Connected Vehicle Applications : Mobility
DOT National Transportation Integrated Search
2017-03-03
Connected vehicle mobility applications are commonly referred to as dynamic mobility applications (DMAs). DMAs seek to fully leverage frequently collected and rapidly disseminated multi-source data gathered from connected travelers, vehicles, and inf...
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement in the resolution of multi-source imagery from satellite visible-light, multispectral and hyperspectral sensors, high resolution remote sensing imagery has been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical-object (vehicle) classification generation, nonparametric density estimation, mean-shift segmentation, a multi-scale corner detection algorithm and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
NASA Astrophysics Data System (ADS)
Yuan, Yanbin; Zhou, You; Zhu, Yaqiong; Yuan, Xiaohui; Sælthun, N. R.
2007-11-01
The development of a flood routing simulation system based on digital technology is an important component of a "digital catchment". Taking the Qingjiang catchment as a pilot case, and building on an in-depth analysis of the informatization of Qingjiang catchment management, the study addresses the multi-source, multi-dimension, multi-element, multi-subject, multi-layer and multi-class character of catchment data by applying the design concept of a "subject-point-source database" (SPSD) to the system structure, so that large volumes of catchment data can be managed in a unified way. Drawing on integrated spatial information technology, an integrated hierarchical development model of the digital catchment is established; this model serves as the general framework for analyzing, designing and realizing the flood routing simulation system. To meet the demands of a three-dimensional flood routing simulation system, an object-oriented spatial data model is designed. The spatio-temporal, self-adapting relation between flood routing and catchment topography is analyzed, the terrain grid data are expressed as an undirected graph, and a breadth-first search algorithm is applied to dynamically search the stream channel over the simulated three-dimensional terrain. A system prototype is realized on this basis. Simulation results demonstrate that the proposed approach is feasible and effective in application.
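The breadth-first channel search described above can be sketched roughly as follows. This is a minimal illustration on a hypothetical elevation grid; the function name, the toy grid and the water-level threshold are assumptions for the example, not the authors' implementation.

```python
from collections import deque
import numpy as np

def search_flooded_channel(elevation, source, water_level):
    """Breadth-first search over a terrain grid treated as an undirected graph of cells.

    elevation   : 2D array of simulated terrain heights
    source      : (row, col) cell where flood routing starts
    water_level : cells with elevation below this value are reachable
    Returns the set of flooded cells, i.e. the dynamically searched channel.
    """
    rows, cols = elevation.shape
    visited = {source}
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        # 4-connected neighbours define the edges of the grid graph
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited
                    and elevation[nr, nc] < water_level):
                visited.add((nr, nc))
                queue.append((nr, nc))
    return visited

# Toy example: a small synthetic valley
dem = np.array([[5., 4., 5.],
                [3., 2., 3.],
                [4., 1., 4.]])
print(search_flooded_channel(dem, source=(2, 1), water_level=3.5))
```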
Integrating multisource land use and land cover data
Wright, Bruce E.; Tait, Mike; Lins, K.F.; Crawford, J.S.; Benjamin, S.P.; Brown, Jesslyn F.
1995-01-01
As part of the U.S. Geological Survey's (USGS) land use and land cover (LULC) program, the USGS in cooperation with the Environmental Systems Research Institute (ESRI) is collecting and integrating LULC data for a standard USGS 1:100,000-scale product. The LULC data collection techniques include interpreting spectrally clustered Landsat Thematic Mapper (TM) images; interpreting 1-meter resolution digital panchromatic orthophoto images; and, for comparison, aggregating locally available large-scale digital data of urban areas. The area selected is the Vancouver, WA-OR quadrangle, which has a mix of urban, rural agriculture, and forest land. Anticipated products include an integrated LULC prototype data set in a standard classification scheme referenced to the USGS digital line graph (DLG) data of the area and prototype software to develop digital LULC data sets. This project will evaluate a draft standard LULC classification system developed by the USGS for use with various source material and collection techniques. Federal, State, and local governments, and private sector groups will have an opportunity to evaluate the resulting prototype software and data sets and to provide recommendations. It is anticipated that this joint research endeavor will increase future collaboration among interested organizations, public and private, for LULC data collection using common standards and tools.
Surface daytime net radiation estimation using artificial neural networks
Jiang, Bo; Zhang, Yi; Liang, Shunlin; ...
2014-11-11
Net all-wave surface radiation (Rn) is one of the most important fundamental parameters in various applications. However, conventional Rn measurements are difficult to collect because of the high cost and ongoing maintenance of recording instruments. Therefore, various empirical Rn estimation models have been developed. This study presents the results of two artificial neural network (ANN) models (general regression neural networks (GRNN) and Neuroet) to estimate Rn globally from multi-source data, including remotely sensed products, surface measurements, and meteorological reanalysis products. Rn estimates provided by the two ANNs were tested against in-situ radiation measurements obtained from 251 global sites between 1991–2010, both in global mode (all data were used to fit the models) and in conditional mode (the data were divided into four subsets and the models were fitted separately). Based on the results obtained from extensive experiments, the two ANNs were superior to linear-based empirical models in both global and conditional modes, and the GRNN performed better and was more stable than Neuroet. The GRNN estimates had a determination coefficient (R2) of 0.92, a root mean square error (RMSE) of 34.27 W·m-2, and a bias of -0.61 W·m-2 in global mode based on the validation dataset. In conclusion, ANN methods are a potentially powerful tool for global Rn estimation.
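A GRNN is essentially Nadaraya-Watson kernel regression in which every training sample acts as a pattern-layer neuron. The sketch below illustrates that idea on synthetic predictors; the variable names, the pseudo-Rn target and the smoothing parameter are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network (Nadaraya-Watson kernel regression).

    Each training sample acts as a pattern-layer neuron with a Gaussian kernel;
    the prediction is the kernel-weighted mean of the training targets.
    """
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to all patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Hypothetical predictors: [satellite shortwave product, air temperature, reanalysis flux]
X = np.random.rand(200, 3)
y = 150 * X[:, 0] + 30 * X[:, 1] - 10 * X[:, 2] + np.random.randn(200)  # pseudo Rn (W m-2)
print(grnn_predict(X, y, X[:5], sigma=0.3))
```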
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui
2018-04-01
Considering that multi-source uncertainties from inherent nature as well as the external environment are unavoidable and severely affect the controller performance, the dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to the closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and fourth Rounge-Kutta algorithm are introduced as well. Enlightened by the first-passage model in random process theory as well as by the static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for the vibration control systems and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.
NASA Astrophysics Data System (ADS)
Yang, Guijun; Yang, Hao; Jin, Xiuliang; Pignatti, Stefano; Casa, Raffaele; Silvestro, Paolo Cosmo
2016-08-01
Drought is among the most costly natural disasters in China and worldwide, so evaluating drought-induced crop yield losses and improving water use efficiency at the regional scale are very important. Firstly, crop biomass was estimated by the combined use of Synthetic Aperture Radar (SAR) and optical remote sensing data. The estimated biophysical variable was then assimilated into a crop growth model (FAO AquaCrop) by the Particle Swarm Optimization (PSO) method, from the farmland scale up to the regional scale. At the farmland scale, the most influential crop parameters of the AquaCrop model were identified to reduce the number of parameters used in the assimilation procedure; the Extended Fourier Amplitude Sensitivity Test (EFAST) method was used to assess the contribution of different crop parameters to the model output, and the AquaCrop model was calibrated using experimental data from Xiaotangshan, Beijing. At the regional scale, the methods were applied spatially and validated in the rural area of Yangling, Shaanxi Province, in 2014. This study provides guidance for irrigation decisions that balance water consumption against yield loss.
Kim, Sunghun; Sterling, Bobbie Sue; Latimer, Lara
2010-01-01
Developing focused and relevant health promotion interventions is critical for behavioral change in a low-resource or special population. Evidence-based interventions, however, may not match the specific population or health concern of interest. This article describes the Multi-Source Method (MSM) which, in combination with a workshop format, may be used by health professionals and researchers in health promotion program development. The MSM draws on positive deviance practices and processes, focus groups, community advisors, behavioral change theory, and evidence-based strategies. Use of the MSM is illustrated in development of ethnic-specific weight loss interventions for low-income postpartum women. The MSM may be useful in designing future health programs designed for other special populations for whom existing interventions are unavailable or lack relevance. PMID:20433674
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services within healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2 and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The framework is used to improve healthcare scalability and efficiency by enhancing the remote triage and remote prioritization processes for patients, and to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As a telemonitoring system consists of three tiers (sensors/sources, base station and server), the simulation of the MSHA algorithm at the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely prioritizing and triaging patients is our main goal, and the role of multi-source data fusion in telemonitoring healthcare systems is demonstrated. In addition, we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results for different symptoms, related to different emergency levels of chronic heart disease, demonstrate the superiority of our algorithm over conventional algorithms in remotely classifying and prioritizing patients.
Gradient-Type Magnetoelectric Current Sensor with Strong Multisource Noise Suppression.
Zhang, Mingji; Or, Siu Wing
2018-02-14
A novel gradient-type magnetoelectric (ME) current sensor operating in magnetic field gradient (MFG) detection and conversion mode is developed based on a pair of ME composites that have a back-to-back capacitor configuration under a baseline separation and a magnetic biasing in an electrically-shielded and mechanically-enclosed housing. The physics behind the current sensing process is the product effect of the current-induced MFG effect associated with vortex magnetic fields of current-carrying cables (i.e., MFG detection) and the MFG-induced ME effect in the ME composite pair (i.e., MFG conversion). The sensor output voltage is directly obtained from the gradient ME voltage of the ME composite pair and is calibrated against cable current to give the current sensitivity. The current sensing performance of the sensor is evaluated, both theoretically and experimentally, under multisource noises of electric fields, magnetic fields, vibrations, and thermals. The sensor combines the merits of small nonlinearity in the current-induced MFG effect with those of high sensitivity and high common-mode noise rejection rate in the MFG-induced ME effect to achieve a high current sensitivity of 0.65-12.55 mV/A in the frequency range of 10 Hz-170 kHz, a small input-output nonlinearity of <500 ppm, a small thermal drift of <0.2%/℃ in the current range of 0-20 A, and a high common-mode noise rejection rate of 17-28 dB from multisource noises.
The assessment of pathologists/laboratory medicine physicians through a multisource feedback tool.
Lockyer, Jocelyn M; Violato, Claudio; Fidler, Herta; Alakija, Pauline
2009-08-01
There is increasing interest in ensuring that physicians demonstrate the full range of Accreditation Council for Graduate Medical Education competencies. The objective was to determine whether it is possible to develop a feasible and reliable multisource feedback instrument for pathologists and laboratory medicine physicians. Surveys with 39, 30, and 22 items were developed to assess individual physicians by 8 peers, 8 referring physicians, and 8 coworkers (e.g., technologists, secretaries), respectively, using 5-point scales and an unable-to-assess category. Physicians completed a self-assessment survey. Items addressed key competencies related to clinical competence, collaboration, professionalism, and communication. Data from 101 pathologists and laboratory medicine physicians were analyzed. The mean number of respondents per physician was 7.6, 7.4, and 7.6 for peers, referring physicians, and coworkers, respectively. The reliability of the internal consistency, measured by Cronbach alpha, was ≥ 0.95 for the full scale of all instruments. Analysis indicated that the medical peer, referring physician, and coworker instruments achieved generalizability coefficients of .78, .81, and .81, respectively. Factor analysis showed 4 factors on the peer questionnaire accounted for 68.8% of the total variance: reports and clinical competency, collaboration, educational leadership, and professional behavior. For the referring physician survey, 3 factors accounted for 66.9% of the variance: professionalism, reports, and clinical competency. Two factors on the coworker questionnaire accounted for 59.9% of the total variance: communication and professionalism. It is feasible to assess this group of physicians using multisource feedback with instruments that are reliable.
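For reference, the Cronbach's alpha reported above can be computed directly from a respondents-by-items rating matrix. The following is a small illustrative sketch on made-up peer ratings, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (respondents x items) matrix of ratings.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = ratings.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 8 peer raters scoring 5 items on a 5-point scale
peer_scores = np.random.randint(3, 6, size=(8, 5))
print(round(cronbach_alpha(peer_scores), 3))
```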
A proposal to extend our understanding of the global economy
NASA Technical Reports Server (NTRS)
Hough, Robbin R.; Ehlers, Manfred
1991-01-01
Satellites acquire information on a global and repetitive basis. They are thus ideal tools when global-scale analysis over time is required. Data from satellites come in digital form, which means that they are ideally suited for incorporation in digital databases and can be evaluated using automated techniques. The development of a global multi-source data set is proposed that integrates digital information on some 15,000 major industrial sites worldwide with remotely sensed images of the sites. The resulting data set would provide the basis for a wide variety of studies of the global economy. The preliminary results give promise of a new class of global policy model that is far more detailed and helpful to local policy makers than its predecessors. The central thesis of this proposal is that major industrial sites can be identified and their utilization can be tracked with the aid of satellite images.
A novel artificial bee colony based clustering algorithm for categorical data.
Ji, Jinchao; Pang, Wei; Zheng, Yanlin; Wang, Zhe; Ma, Zhiqiang
2015-01-01
Data with categorical attributes are ubiquitous in the real world. However, existing partitional clustering algorithms for categorical data are prone to fall into local optima. To address this issue, in this paper we propose a novel clustering algorithm, ABC-K-Modes (Artificial Bee Colony clustering based on K-Modes), based on the traditional k-modes clustering algorithm and the artificial bee colony approach. In our approach, we first introduce a one-step k-modes procedure, and then integrate this procedure with the artificial bee colony approach to deal with categorical data. In the search process performed by scout bees, we adopt the multi-source search inspired by the idea of batch processing to accelerate the convergence of ABC-K-Modes. The performance of ABC-K-Modes is evaluated by a series of experiments in comparison with that of the other popular algorithms for categorical data.
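The "one-step k-modes procedure" referred to above can be pictured as a single assign-and-update pass over categorical records. The sketch below, with a toy data set and function names chosen purely for illustration, shows such a step, which the bee-colony search would then apply repeatedly.

```python
from collections import Counter

def one_step_kmodes(data, modes):
    """One assignment-and-update step of k-modes for categorical data.

    data  : list of records (tuples of categorical values)
    modes : current cluster modes (one tuple per cluster)
    Returns the updated modes and the cluster labels.
    """
    def mismatch(a, b):                       # simple matching dissimilarity
        return sum(x != y for x, y in zip(a, b))

    labels = [min(range(len(modes)), key=lambda k: mismatch(rec, modes[k]))
              for rec in data]
    new_modes = []
    for k in range(len(modes)):
        members = [rec for rec, lab in zip(data, labels) if lab == k]
        if not members:                       # keep empty clusters unchanged
            new_modes.append(modes[k])
            continue
        # mode = most frequent category in each attribute
        new_modes.append(tuple(Counter(col).most_common(1)[0][0]
                               for col in zip(*members)))
    return new_modes, labels

data = [("red", "round"), ("red", "square"), ("blue", "round"), ("blue", "square")]
print(one_step_kmodes(data, modes=[("red", "round"), ("blue", "square")]))
```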
Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization
NASA Astrophysics Data System (ADS)
Liu, Zexi
2018-01-01
Indoor positioning technology gives human beings positional perception in architectural space, but single-network coverage is limited and location data are redundant. This article therefore studies data clustering and intelligent decision-making for indoor positioning: it outlines the basic ideas of multi-source indoor positioning technology and analyzes fingerprint localization based on distance measurement together with the integration of position and orientation from inertial devices. By optimizing the clustering of massive indoor location data, data normalization pretreatment, multi-dimensional controllable clustering centers and multi-factor clustering are realized, and the redundancy of the positioning data is reduced. In addition, path planning based on neural network inference and decision-making is proposed, with a sparse-data input layer, a dynamic feedback hidden layer and an output layer; the low-dimensional results improve intelligent navigation path planning.
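One common way to realize the fingerprint localization mentioned above is weighted k-nearest-neighbour matching against an offline radio map. The following is a minimal sketch with a hypothetical radio map and coordinates, not the paper's algorithm.

```python
import numpy as np

def knn_fingerprint_locate(db_rss, db_xy, observed_rss, k=3):
    """Weighted k-nearest-neighbour fingerprint positioning.

    db_rss       : (N, M) offline radio map, RSS of M access points at N reference points
    db_xy        : (N, 2) coordinates of the reference points
    observed_rss : (M,) online RSS measurement
    Returns the estimated (x, y) as a distance-weighted average of the k closest fingerprints.
    """
    d = np.linalg.norm(db_rss - observed_rss, axis=1)   # signal-space distance
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                            # closer fingerprints weigh more
    return (db_xy[idx] * w[:, None]).sum(axis=0) / w.sum()

# Toy radio map: 4 reference points, 3 access points
radio_map = np.array([[-40., -70., -80.],
                      [-70., -40., -80.],
                      [-80., -70., -40.],
                      [-60., -60., -60.]])
coords = np.array([[0., 0.], [10., 0.], [10., 10.], [5., 5.]])
print(knn_fingerprint_locate(radio_map, coords, np.array([-42., -68., -78.])))
```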
Retrieval of biophysical parameters with AVIRIS and ISM: The Landes Forest, south west France
NASA Technical Reports Server (NTRS)
Zagolski, F.; Gastellu-Etchegorry, J. P.; Mougin, E.; Giordano, G.; Marty, G.; Letoan, T.; Beaudoin, A.
1992-01-01
The first steps of an experiment for investigating the capability of airborne spectrometer data for retrieval of biophysical parameters of vegetation, especially water conditions, are presented. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and ISM data were acquired in the frame of the 1991 NASA/JPL and CNES campaigns over the Landes, southwest France, a large and flat forest area dominated by maritime pines. In-situ measurements were completed at that time, i.e. reflectance spectra, atmospheric profiles, and sampling for further laboratory analyses of element concentrations (lignin, water, cellulose, nitrogen, ...). All information was integrated in an already existing database (age, LAI, DBH, understory cover, ...). A methodology was designed for (1) obtaining geometrically and atmospherically corrected reflectance data, (2) registering all available information, and (3) analyzing this multi-source information. Our objective is to conduct comparative studies with reflectance simulation models, and to improve these models, especially in the MIR.
Fan, Yuanjie; Yin, Yuehong
2013-12-01
Although exoskeletons have received enormous attention and have been widely used in gait training and walking assistance in recent years, few reports have addressed their application during early poststroke rehabilitation. This paper presents a healthcare technology for active and progressive early rehabilitation using multisource information fusion from surface electromyography and force-position extended physiological proprioception. The active-compliance control based on interaction force between patient and exoskeleton is applied to accelerate the recovery of the neuromuscular function, whereby progressive treatment through timely evaluation contributes to an effective and appropriate physical rehabilitation. Moreover, a clinic-oriented rehabilitation system, wherein a lower extremity exoskeleton with active compliance is mounted on a standing bed, is designed to ensure comfortable and secure rehabilitation according to the structure and control requirements. Preliminary experiments and a clinical trial provide valuable information on the feasibility, safety, and effectiveness of the progressive exoskeleton-assisted training.
Are multisource levothyroxine sodium tablets marketed in Egypt interchangeable?
Abou-Taleb, Basant A; Bondok, Maha; Nounou, Mohamed Ismail; Khalafallah, Nawal; Khalil, Saleh
2018-02-01
A clinical study was initiated in response to patients' complaints, supported by the treating physicians, of suspected differences in efficacy among multisource levothyroxine sodium tablets marketed in Egypt. The study design was multiple dose (100 μg levothyroxine sodium tablet once daily for 6 months) and involved 50 female patients with primary hypothyroidism (5 equal groups). The tablets administered included five tablet batches (two brands, three origin locations) purchased from local pharmacies in Alexandria. Assessment parameters (measured on consecutive visits) included thyroid stimulating hormone and total and free levothyroxine. Tablet dissolution rate was determined (BP/EP 2014 & USP 2014), and in vitro versus in vivo correlations were developed. Clinical and pharmaceutical data confirmed inter-brand and inter-source differences in efficacy. The correlations examined indicated the potential usefulness of the in vitro dissolution test in detecting poor-performing levothyroxine sodium tablets during shelf life. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang
In implementation of nuclear safeguards, many different techniques are being used to monitor operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, digital seals to open source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or loose correlations, it could be beneficial to analyze the data sets together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential values in nuclear safeguards.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel
2013-01-01
Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDFs) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances under shifts in different conditions (such as uni- and multivariate variables, different variable types, and multi-modality) that may appear in real biomedical data. Of the distances studied, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
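As a rough illustration of comparing two data subsets with the kinds of distances highlighted above, the Jensen-Shannon distance (information-theoretic) and the Earth Mover's (Wasserstein) distance can be computed with SciPy; the two simulated "sites" below are assumptions for the example only.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

# Two hypothetical biobank subsets of the same lab variable (different acquisition sites)
rng = np.random.default_rng(0)
site_a = rng.normal(loc=5.0, scale=1.0, size=2000)
site_b = rng.normal(loc=5.4, scale=1.2, size=2000)    # shifted PDF

# Earth Mover's (Wasserstein) distance works directly on the samples
emd = wasserstein_distance(site_a, site_b)

# The information-theoretic distance needs a common histogram support
bins = np.histogram_bin_edges(np.concatenate([site_a, site_b]), bins=50)
p, _ = np.histogram(site_a, bins=bins, density=True)
q, _ = np.histogram(site_b, bins=bins, density=True)
jsd = jensenshannon(p, q, base=2)                      # bounded in [0, 1]

print(f"EMD = {emd:.3f}, Jensen-Shannon distance = {jsd:.3f}")
```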
Multisource drug policies in Latin America: survey of 10 countries.
Homedes, Núria; Ugalde, Antonio
2005-01-01
Essential drug lists and generic drug policies have been promoted as strategies to improve access to pharmaceuticals and control their rapidly escalating costs. This article reports the results of a preliminary survey conducted in 10 Latin American countries. The study aimed to document the experiences of different countries in defining and implementing generic drug policies, determine the cost of registering different types of pharmaceutical products and the time needed to register them, and uncover the incentives governments have developed to promote the use of multisource drugs. The survey instrument was administered in person in Chile, Ecuador and Peru and by email in Argentina, Brazil, Bolivia, Colombia, Costa Rica, Nicaragua and Uruguay. There was a total of 22 respondents. Survey responses indicated that countries use the terms generic and bioequivalence differently. We suggest there is a need to harmonize definitions and technical concepts. PMID:15682251
Using multilevel, multisource needs assessment data for planning community interventions.
Levy, Susan R; Anderson, Emily E; Issel, L Michele; Willis, Marilyn A; Dancy, Barbara L; Jacobson, Kristin M; Fleming, Shirley G; Copper, Elizabeth S; Berrios, Nerida M; Sciammarella, Esther; Ochoa, Mónica; Hebert-Beirne, Jennifer
2004-01-01
African Americans and Latinos share higher rates of cardiovascular disease (CVD) and diabetes compared with Whites. These diseases have common risk factors that are amenable to primary and secondary prevention. The goal of the Chicago REACH 2010-Lawndale Health Promotion Project is to eliminate disparities related to CVD and diabetes experienced by African Americans and Latinos in two contiguous Chicago neighborhoods using a community-based prevention approach. This article shares findings from the Phase 1 participatory planning process and discusses the implications these findings and lessons learned may have for programs aiming to reduce health disparities in multiethnic communities. The triangulation of data sources from the planning phase enriched interpretation and led to more creative and feasible suggestions for programmatic interventions across the four levels of the ecological framework. Multisource data yielded useful information for program planning and a better understanding of the cultural differences and similarities between African Americans and Latinos.
Effective Coping With Supervisor Conflict Depends on Control: Implications for Work Strains.
Eatough, Erin M; Chang, Chu-Hsiang
2018-01-11
This study examined the interactive effects of interpersonal conflict at work, coping strategy, and perceived control specific to the conflict on employee work strain using multisource and time-lagged data across two samples. In Sample 1, multisource data was collected from 438 employees as well as data from participant-identified secondary sources (e.g., significant others, best friends). In Sample 2, time-lagged data from 100 full-time employees was collected in a constructive replication. Overall, findings suggested that the success of coping efforts as indicated by lower strains hinges on the combination of the severity of the stressor, perceived control over the stressor, and coping strategy used (problem-focused vs. emotion-focused coping). Results from the current study provide insights for why previous efforts to document the moderating effects of coping have been inconsistent, especially with regards to emotion-focused coping. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
López de Ipiña, JM; Vaquero, C.; Gutierrez-Cañas, C.
2017-06-01
A progressive increase is expected in industrial processes that manufacture intermediate products (iNEPs) and end products incorporating ENMs (eNEPs) to bring about improved properties. Therefore, the assessment of occupational exposure to airborne NOAA will migrate from the simple and well-controlled exposure scenarios of research laboratories and ENM production plants using innovative production technologies to much more complex exposure scenarios located around eNEP manufacturing processes that, in many cases, will be modified conventional production processes. Some of the typical challenging situations in the risk assessment of inhalation exposure to NOAA in Multi-Source Industrial Scenarios (MSIS) are discussed here, on the basis of lessons learned when confronting those scenarios within several European and Spanish research projects.
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
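The coarse-then-fine RANSAC idea can be sketched as follows. Note the simplification: the paper matches conjugate line pairs, whereas this illustration estimates a 2D rigid transform from matched points, so it is only an analogy of the RLMR workflow, not its implementation; all names and tolerances are assumptions.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares 2D rotation + translation mapping src onto dst (Kabsch method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_registration(src, dst, iters=500, tol=0.1, rng=np.random.default_rng(1)):
    """Coarse RANSAC estimate followed by a fine re-fit on all inliers."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)   # minimal sample
        R, t = rigid_from_pairs(src[idx], dst[idx])
        resid = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_from_pairs(src[best_inliers], dst[best_inliers])  # refinement step

# Synthetic check: rotate/translate a point cloud and add one gross mismatch
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.random.default_rng(2).uniform(0, 10, size=(30, 2))
dst = src @ R_true.T + np.array([2.0, -1.0])
dst[0] += 5.0                                                # simulated outlier
R_est, t_est = ransac_registration(src, dst)
print(np.round(R_est, 3), np.round(t_est, 3))
```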
Retrieving Temperature Anomaly in the Global Subsurface and Deeper Ocean From Satellite Observations
NASA Astrophysics Data System (ADS)
Su, Hua; Li, Wene; Yan, Xiao-Hai
2018-01-01
Retrieving the subsurface and deeper ocean (SDO) dynamic parameters from satellite observations is crucial for effectively understanding ocean interior anomalies and dynamic processes, but it is challenging to accurately estimate the subsurface thermal structure over the global scale from sea surface parameters. This study proposes a new approach based on Random Forest (RF) machine learning to retrieve subsurface temperature anomaly (STA) in the global ocean from multisource satellite observations including sea surface height anomaly (SSHA), sea surface temperature anomaly (SSTA), sea surface salinity anomaly (SSSA), and sea surface wind anomaly (SSWA), with in situ Argo data used for RF training and testing. The RF machine-learning approach can accurately retrieve the STA in the global ocean from satellite observations of sea surface parameters (SSHA, SSTA, SSSA, SSWA). The Argo STA data were used to validate the accuracy and reliability of the results from the RF model. The results indicated that SSHA, SSTA, SSSA, and SSWA together are useful parameters for detecting SDO thermal information and obtaining accurate STA estimations. The proposed method also outperformed support vector regression (SVR) in global STA estimation. It will be a useful technique for studying SDO thermal variability and its role in the global climate system from global-scale satellite observations.
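A minimal sketch of the Random Forest retrieval step, using scikit-learn and purely synthetic sea-surface predictors and STA targets (the real study trains against Argo profiles, typically per depth level; the data and scores here are illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical surface predictors: SSHA, SSTA, SSSA, SSWA (one row per grid cell / month)
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))
# Pseudo subsurface temperature anomaly, used only to make the sketch runnable
sta = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.normal(size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, sta, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R2 on held-out Argo-style samples:", round(r2_score(y_te, rf.predict(X_te)), 3))
```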
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
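For the linear spectral mixing case mentioned above, a single pixel can be unmixed by constrained least squares. The sketch below uses SciPy's non-negative least squares with a softly enforced sum-to-one constraint, on made-up endmember spectra; it illustrates the principle rather than any of the advanced unmixing techniques listed in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel):
    """Linear spectral unmixing with non-negativity and sum-to-one constraints.

    endmembers : (B, E) matrix, one pure spectrum per column (B bands, E endmembers)
    pixel      : (B,) mixed reflectance spectrum
    The sum-to-one constraint is imposed softly by appending a heavily weighted row of ones.
    """
    B, E = endmembers.shape
    w = 1e3                                             # weight of the sum-to-one row
    A = np.vstack([endmembers, w * np.ones((1, E))])
    b = np.concatenate([pixel, [w]])
    fractions, _ = nnls(A, b)
    return fractions

# Toy 4-band example with two endmembers (e.g. soil and vegetation spectra)
E = np.array([[0.30, 0.05],
              [0.35, 0.08],
              [0.40, 0.45],
              [0.45, 0.50]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]
print(np.round(unmix_pixel(E, mixed), 3))               # approx. [0.7, 0.3]
```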
NASA Astrophysics Data System (ADS)
Xie, Yuchu; Gong, Jie; Sun, Peng; Gou, Xiaohua
2014-12-01
As one of the vital research highlights of global land use and cover change, oasis change and its interaction with landscape pattern have been regarded as an important content of regional environmental change research in arid areas. Jinta oasis, a typical agricultural oasis characterized by its dramatic exploitation and use of water and land resources in Hexi corridor, northwest arid region in China, was selected as a case to study the spatiotemporal oasis change and its effects on oasis landscape pattern. Based on integration of Keyhole satellite photographs, KATE-200 photographs, Landsat MSS, TM and ETM+ images, we evaluated and analyzed the status, trend and spatial pattern change of Jinta oasis and the characteristics of landscape pattern change by a set of mathematical models and combined this information with landscape metrics and community surveys. During the period of 1963a-2010a, Jinta oasis expanded gradually with an area increase of 219.15 km2, and the conversion between oasis and desert was frequent with a state of “imbalance-balance-extreme imbalance conditions”. Moreover, most of the changes took place in the ecotone between oasis and desert and the interior of oasis due to the reclamation of abandoned land, such as Yangjingziwan and Xiba townships. Furthermore, the area, size and spatial distribution of oasis were influenced by human activities and resulted in fundamental changes of oasis landscape pattern. The fractal characteristics, dispersion degree and fragmentation of Jinta oasis decreased and the oasis landscape tended to be simple and uniform. Oasis change trajectories and its landscape pattern were mainly influenced by water resource utilization, policies (especially land policies), demographic factors, technological advancements, as well as regional economic development. We found that time series analysis of multi-source remote sensing images and the application of an oasis change model provided a useful approach to monitor oasis change over a long-term period in arid area. It is recommended that the government and farmers should pay more attention to the fragility of the natural system and the government should enhance the leading role of environmental considerations in the development process of oasis change, particularly with respect to the utilization of the limited water and land resources in arid China.
Koch, Tobias; Schultze, Martin; Jeon, Minjeong; Nussbeck, Fridtjof W; Praetorius, Anna-Katharina; Eid, Michael
2016-01-01
Multirater (multimethod, multisource) studies are increasingly applied in psychology. Eid and colleagues (2008) proposed a multilevel confirmatory factor model for multitrait-multimethod (MTMM) data combining structurally different and multiple independent interchangeable methods (raters). In many studies, however, different interchangeable raters (e.g., peers, subordinates) are asked to rate different targets (students, supervisors), leading to violations of the independence assumption and to cross-classified data structures. In the present work, we extend the ML-CFA-MTMM model by Eid and colleagues (2008) to cross-classified multirater designs. The new C4 model (Cross-Classified CTC[M-1] Combination of Methods) accounts for nonindependent interchangeable raters and enables researchers to explicitly model the interaction between targets and raters as a latent variable. Using a real data application, it is shown how credibility intervals of model parameters and different variance components can be obtained using Bayesian estimation techniques.
NASA Astrophysics Data System (ADS)
Jacob, Rajani; Philip, Rachel Reena; Nazer, Sheeba; Abraham, Anitha; Nair, Sinitha B.; Pradeep, B.; Urmila, K. S.; Okram, G. S.
2014-01-01
Polycrystalline thin films of silver gallium selenide were deposited on ultrasonically cleaned soda lime glass substrates by a multi-source vacuum co-evaporation technique. The structural analysis done by X-ray diffraction ascertained the formation of nanostructured tetragonal chalcopyrite thin films. The compound formation was confirmed by X-ray photoelectron spectroscopy. Atomic force microscopy was used for surface morphological analysis. A direct allowed band gap of ~1.78 eV with a high absorption coefficient of ~10^6 m^-1 was estimated from the absorbance spectra. Low-temperature thermoelectric effects were investigated in the temperature range 80-330 K, which manifested an unusual increase in the Seebeck coefficient with negligible phonon drag toward the very low and room temperature regimes. The electrical resistivity of these n-type films was assessed to be ~2.6 Ωm, and the films showed a good photoresponse.
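A direct allowed band gap is commonly extracted from absorbance data with a Tauc plot, fitting the linear region of (αhν)² versus hν and extrapolating to zero. The sketch below assumes this standard procedure on synthetic data; it is not the authors' analysis script, and the fit window is an illustrative choice.

```python
import numpy as np

def direct_band_gap(photon_energy_eV, alpha, fit_window):
    """Estimate a direct allowed band gap from a Tauc plot: (alpha*h*nu)^2 vs h*nu.

    A straight line is fitted over the user-chosen linear region (fit_window, in eV)
    and extrapolated to (alpha*h*nu)^2 = 0; the intercept with the energy axis is Eg.
    """
    y = (alpha * photon_energy_eV) ** 2
    sel = (photon_energy_eV >= fit_window[0]) & (photon_energy_eV <= fit_window[1])
    slope, intercept = np.polyfit(photon_energy_eV[sel], y[sel], 1)
    return -intercept / slope                          # x-axis intercept = Eg

# Synthetic spectrum obeying alpha*h*nu = A*sqrt(h*nu - Eg) above a 1.78 eV gap
hv = np.linspace(1.5, 2.5, 200)
alpha_true = np.where(hv > 1.78, 1e6 * np.sqrt(np.clip(hv - 1.78, 0, None)) / hv, 0.0)
print(round(direct_band_gap(hv, alpha_true, fit_window=(1.9, 2.3)), 2))   # ~1.78
```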
Knouse, Laura E; Traeger, Lara; O'Cleirigh, Conall; Safren, Steven A
2013-10-01
Relationships among attention deficit hyperactivity disorder (ADHD) symptoms and adult personality traits have not been examined in larger clinically diagnosed samples. We collected multisource ADHD symptom and self-report NEO Five-Factor Inventory (Costa and McCrae [Odessa, FL: Psychological Assessment Resources, Inc., 1992]) data from 117 adults with ADHD and tested symptom-trait associations using structural equation modeling. The final model fit the data. Inattention was positively associated with neuroticism and negatively associated with conscientiousness. On the basis of ADHD expression in adulthood, hyperactivity and impulsivity were estimated as separate constructs and showed differential relationships to extraversion and agreeableness. A significant positive relationship between hyperactivity and conscientiousness arose in the context of other pathways. ADHD symptoms are reliably associated with personality traits, suggesting a complex interplay across development that warrants prospective study into adulthood.
NASA Astrophysics Data System (ADS)
Liu, Y. L.; Wei, C. J.; Yan, L.; Chi, T. H.; Wu, X. B.; Xiao, C. S.
2006-03-01
Since the outbreak of highly pathogenic avian influenza (HPAI) in South Korea at the end of 2003, estimates of the impact of HPAI in affected countries have varied greatly; total direct losses are about 3 billion US dollars, and the disease caused the death of 15 million birds and poultry flocks. Understanding the spatial distribution and transmission characteristics of HPAI is important for its prevention and control. Based on 50 HPAI outbreak cases in mainland China during 2004, this paper introduces an approach for analyzing the spatial distribution and transmission characteristics of HPAI, together with its results. The approach is based on remote sensing and GIS techniques. The supporting data set involves the normalized difference vegetation index (NDVI) and land surface temperature (Ts) derived from a time series of 1-kilometer-resolution NOAA/AVHRR remote sensing data, bird migration routes, topographic maps, lake and wetland maps, and meteorological observation data. To analyze these data synthetically, a supporting platform for the analysis of the avian influenza epidemic situation (SPAS/AI) was developed. Supported by SPAS/AI, the integrated multi-source information can easily be used to analyze the spatial distribution and transmission characteristics of HPAI. The results show that the range of spatial distribution and transmission of HPAI in China during 2004 was connected to the environmental factors NDVI and Ts and to the distribution of lakes and wetlands, and especially to bird migration routes. To some extent, the results provide suggestions for macro decision-making in the prevention and control of HPAI in areas of potential risk and recurrence.
Study on key technologies of optimization of big data for thermal power plant performance
NASA Astrophysics Data System (ADS)
Mao, Mingyang; Xiao, Hong
2018-06-01
Thermal power generation accounts for 70% of China's power generation, and its pollutants account for 40% of emissions of the same kind. Optimizing thermal power efficiency requires monitoring and understanding the whole process of coal combustion and pollutant migration, while power system performance data show an explosive growth trend. The purpose of this study is to integrate numerical simulation with big data technology and to develop a thermal power plant efficiency data optimization platform and a nitrogen oxide emission reduction system, providing reliable technical support for efficiency improvement, energy saving and emission reduction in thermal power plants. The method relies on big data technologies such as multi-source heterogeneous data integration, distributed storage of large data, and high-performance real-time and offline computing, which can greatly enhance the energy consumption analysis capacity of thermal power plants and the level of intelligent decision-making. Data mining algorithms are then used to establish a mathematical model of boiler combustion and to mine boiler efficiency data, combined with numerical simulation to find the rules of boiler combustion and pollutant generation and the influence of combustion parameters on them. The result is optimized boiler combustion parameters, which achieve energy savings.
NASA Astrophysics Data System (ADS)
Anton, S. R.; Taylor, S. G.; Raby, E. Y.; Farinholt, K. M.
2013-03-01
With a global interest in the development of clean, renewable energy, wind energy has seen steady growth over the past several years. Advances in wind turbine technology bring larger, more complex turbines and wind farms. An important issue in the development of these complex systems is the ability to monitor the state of each turbine in an effort to improve the efficiency and power generation. Wireless sensor nodes can be used to interrogate the current state and health of wind turbine structures; however, a drawback of most current wireless sensor technology is their reliance on batteries for power. Energy harvesting solutions present the ability to create autonomous power sources for small, low-power electronics through the scavenging of ambient energy; however, most conventional energy harvesting systems employ a single mode of energy conversion, and thus are highly susceptible to variations in the ambient energy. In this work, a multi-source energy harvesting system is developed to power embedded electronics for wind turbine applications in which energy can be scavenged simultaneously from several ambient energy sources. Field testing is performed on a full-size, residential scale wind turbine where both vibration and solar energy harvesting systems are utilized to power wireless sensing systems. Two wireless sensors are investigated, including the wireless impedance device (WID) sensor node, developed at Los Alamos National Laboratory (LANL), and an ultra-low power RF system-on-chip board that is the basis for an embedded wireless accelerometer node currently under development at LANL. Results indicate the ability of the multi-source harvester to successfully power both sensors.
van der Meulen, Mirja W; Boerebach, Benjamin C M; Smirnova, Alina; Heeneman, Sylvia; Oude Egbrink, Mirjam G A; van der Vleuten, Cees P M; Arah, Onyebuchi A; Lombarts, Kiki M J M H
2017-01-01
Multisource feedback (MSF) instruments are used to provide physicians with data on their performance from multiple perspectives, and must do so in a reliable, valid and feasible way. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. Using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α and generalizability analyses, the psychometric qualities and feasibility of the INCEPT were investigated. For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self)-management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents and three to four coworkers were sufficient. The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework that focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimension characteristics of marine data pose a challenge for interactive and feasible (timely) marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized, and the effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
Learning mechanisms to limit medication administration errors.
Drach-Zahavy, Anat; Pud, Dorit
2010-04-01
This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.
Impact of workplace based assessment on doctors' education and performance: a systematic review.
Miller, Alice; Archer, Julian
2010-09-24
To investigate the literature for evidence that workplace based assessment affects doctors' education and performance. Systematic review. The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors' performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors' education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.
Study on paddy rice yield estimation based on multisource data and the Grey system theory
NASA Astrophysics Data System (ADS)
Deng, Wensheng; Wang, Wei; Liu, Hai; Li, Chen; Ge, Yimin; Zheng, Xianghua
2009-10-01
Paddy rice is an important crop. In studies of paddy rice yield estimation, scholars usually take only remote sensing data or meteorology as influence factors; here we combine remote sensing and meteorological data to bring the monitoring result closer to reality. Although grey system theory has been used in many fields, it has rarely been applied to paddy rice yield estimation. This study introduces it to paddy rice yield estimation and builds a yield estimation model, which can resolve the small-data-set problem that cannot be solved by deterministic models. Several regions in the Jianghan Plain were selected as the study area. The data include multi-temporal remote sensing images, meteorological data and statistical data: the remote sensing data are 16-day composite MODIS images (250-m spatial resolution); the meteorological data include monthly average temperature, sunshine duration and rainfall amount; and the statistical data are the long-term paddy rice yields of the study area. Firstly, the paddy rice planting area is extracted from the multi-temporal MODIS images with the help of GIS and RS. Then, taking the paddy rice yield as the reference sequence and the MODIS and meteorological data as the comparative sequences, the grey relational coefficients are computed and the yield estimation factors are selected based on grey system theory. Finally, using these factors, the yield estimation model is established and the results are tested. The results indicate that the method is feasible and the conclusion is credible, providing a scientific method and reference for regional paddy rice yield estimation by remote sensing.
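The grey relational step described above (reference sequence = yield, comparative sequences = remote sensing and meteorological factors) reduces to a simple coefficient formula. A minimal sketch with hypothetical series follows, using the customary distinguishing coefficient of 0.5; the numbers are made up for illustration.

```python
import numpy as np

def grey_relational_grades(reference, comparatives, rho=0.5):
    """Grey relational grades between a reference sequence and comparative sequences.

    reference    : (T,) yield series
    comparatives : (F, T) candidate factor series (e.g. NDVI, temperature, sunshine, rainfall)
    Sequences are first normalized to [0, 1]; rho is the distinguishing coefficient.
    """
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    ref = norm(reference)
    comp = np.array([norm(c) for c in comparatives])
    delta = np.abs(comp - ref)                         # absolute differences
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)    # relational coefficients
    return xi.mean(axis=1)                             # one grade per factor

# Hypothetical 6-year series
yield_series = [6.1, 6.4, 6.0, 6.8, 7.0, 6.5]
factors = [[0.52, 0.55, 0.50, 0.60, 0.62, 0.56],       # NDVI
           [22.0, 22.5, 21.0, 23.5, 24.0, 22.8]]       # mean temperature
print(np.round(grey_relational_grades(yield_series, factors), 3))
```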
NASA Astrophysics Data System (ADS)
Fieuzal, R.; Marais Sicre, C.; Baup, F.
2017-05-01
The yield forecasting of corn constitutes a key issue in agricultural management, particularly in the context of demographic pressure and climate change. This study presents two methods to estimate yields using artificial neural networks: a diagnostic approach based on all the satellite data acquired throughout the agricultural season, and a real-time approach, where estimates are updated after each image is acquired in the microwave and optical domains (Formosat-2, Spot-4/5, TerraSAR-X, and Radarsat-2) throughout the crop cycle. The results are based on the Multispectral Crop Monitoring experimental campaign conducted by the CESBIO (Centre d'Études de la BIOsphère) laboratory in 2010 over an agricultural region in southwestern France. Among the tested sensor configurations (multi-frequency, multi-polarization, or multi-source data), the best yield estimation performance (using the diagnostic approach) is obtained with reflectance acquired in the red wavelength region, with a coefficient of determination of 0.77 and an RMSE of 6.6 q ha-1. In the real-time approach, the combination of red reflectance and CHH backscattering coefficients provides the best compromise between the accuracy and earliness of the yield estimate (more than 3 months before the harvest), with an R2 of 0.69 and an RMSE of 7.0 q ha-1 during the development of the central stem. The two best yield estimates are similar in most cases (for more than 80% of the monitored fields), and the differences are related to discrepancies in the crop growth cycle and/or the consequences of pests.
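As a rough illustration of the kind of neural-network regression described above, the sketch below trains a small multilayer perceptron on two synthetic per-field features (red reflectance and C-band HH backscatter); the data, network size, and accuracy figures are placeholders, not the authors' campaign data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_fields = 120
red_reflectance = rng.uniform(0.03, 0.15, n_fields)      # optical feature
chh_backscatter = rng.uniform(-14.0, -6.0, n_fields)     # radar feature (dB)
# Synthetic yield (q/ha) loosely tied to both features, plus noise.
yield_q_ha = 95 - 250 * red_reflectance + 1.5 * (chh_backscatter + 10) + rng.normal(0, 4, n_fields)

X = np.column_stack([red_reflectance, chh_backscatter])
X_train, X_test, y_train, y_test = train_test_split(X, yield_q_ha, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("R2 =", round(r2_score(y_test, pred), 2),
      "RMSE =", round(mean_squared_error(y_test, pred) ** 0.5, 2), "q/ha")
```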
NASA Astrophysics Data System (ADS)
Ren, Y.
2017-12-01
Context: The spatio-temporal distribution pattern of land surface temperature (LST) in urban forests is influenced by many ecological factors; identifying the interactions between these factors can improve simulations and predictions of the spatial patterns of urban cold islands. Such quantitative research requires an integrated method that combines multi-source data with spatial statistical analysis. Objectives: The purpose of this study was to clarify how the interaction between anthropogenic activity and multiple ecological factors influences urban forest LST, using cluster analysis of hot and cold spots and the GeoDetector model. We introduced the hypothesis that anthropogenic activity interacts with certain ecological factors and that their combination influences urban forest LST. We also assumed that the spatio-temporal distribution of urban forest LST is similar to that of the ecological factors and can be represented quantitatively. Methods: We used Jinjiang, a representative city in China, as a case study. Population density was employed to represent anthropogenic activity. We built a multi-source dataset (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) on a unified urban scale to support research on the interactions influencing urban forest LST. Through a combination of spatial statistical analysis results, multi-source spatial data, and the GeoDetector model, the interaction mechanisms of urban forest LST were revealed. Results: Although different ecological factors have different influences on forest LST, in the two periods with different hot and cold spots, patch area and dominant tree species were the main factors contributing to LST clustering in urban forests. The interaction between anthropogenic activity and multiple ecological factors increased LST in urban forest stands, both linearly and nonlinearly. Strong interactions between elevation and dominant species were generally observed and were prevalent in both hot and cold spot areas in different years. Conclusions: A combination of spatial statistics and the GeoDetector model is effective for quantitatively evaluating the interactive relationships among ecological factors, anthropogenic activity, and LST.
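The factor-detector statistic at the heart of GeoDetector-style analysis can be illustrated compactly. The sketch below computes the q statistic (the share of LST variance explained by a categorical stratification) on invented stand-level data; it is not the authors' implementation.

```python
import numpy as np
import pandas as pd

def geodetector_q(lst, strata):
    """Factor-detector q statistic: proportion of LST variance explained by a
    categorical factor (e.g. dominant tree species). q = 1 - SSW / SST."""
    df = pd.DataFrame({"lst": lst, "stratum": strata})
    n, total_var = len(df), df["lst"].var(ddof=0)
    ssw = sum(len(g) * g["lst"].var(ddof=0) for _, g in df.groupby("stratum"))
    return 1.0 - ssw / (n * total_var)

# Hypothetical stand-level data: LST (deg C) and dominant species class.
rng = np.random.default_rng(1)
species = rng.integers(0, 3, 200)
lst = 30 + 1.5 * species + rng.normal(0, 1.0, 200)
print("q(dominant species) =", round(geodetector_q(lst, species), 3))
```

Interaction detection compares q for two factors taken jointly against their individual q values.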
BIM-GIS Integrated Geospatial Information Model Using Semantic Web and RDF Graphs
NASA Astrophysics Data System (ADS)
Hor, A.-H.; Jadidi, A.; Sohn, G.
2016-06-01
In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning and emergency and facility management. Accomplishing such sophisticated decision tasks creates a large demand for multi-scale, multi-source 3D urban models. Currently, Building Information Models (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, sharing and exchanging information between the two modelling domains remains a major challenge; purely syntactic or semantic approaches do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: construction of BIM-RDF and GIS-RDF graphs, integration of the two RDF graphs, and querying of information through the IGIM-RDF graph using SPARQL. The IGIM generates queries from both the BIM and GIS RDF graphs, resulting in a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query using a set of datasets and entity classes with complementary properties, relationships, and geometries. To validate the proposed approach and its performance, a case study was tested using the IGIM system design.
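A minimal sketch of the query step, using rdflib in Python: a BIM-derived triple set and a GIS-derived triple set are loaded into one graph and queried with SPARQL across both. The namespaces, class names, and the linking predicate are illustrative assumptions, not the IGIM schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

BIM = Namespace("http://example.org/bim#")   # hypothetical BIM namespace
GIS = Namespace("http://example.org/gis#")   # hypothetical GIS namespace

g = Graph()
# A BIM wall element and a GIS parcel feature, linked by an assumed predicate.
g.add((BIM.wall_01, RDF.type, BIM.IfcWall))
g.add((BIM.wall_01, BIM.locatedOnParcel, GIS.parcel_42))
g.add((GIS.parcel_42, RDF.type, GIS.Parcel))
g.add((GIS.parcel_42, GIS.zoning, Literal("residential")))

# SPARQL query spanning both sub-graphs, analogous to an IGIM-style query.
query = """
PREFIX bim: <http://example.org/bim#>
PREFIX gis: <http://example.org/gis#>
SELECT ?wall ?zone WHERE {
    ?wall a bim:IfcWall ;
          bim:locatedOnParcel ?parcel .
    ?parcel gis:zoning ?zone .
}
"""
for wall, zone in g.query(query):
    print(wall, zone)
```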
NCC: A Multidisciplinary Design/Analysis Tool for Combustion Systems
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Quealy, Angela
1999-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Lewis Research Center (LeRC), and Pratt & Whitney (P&W). This development team operates under the guidance of the NCC steering committee. The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration.
Velpuri, N.M.; Senay, G.B.; Asante, K.O.
2012-01-01
Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and the unavailability of reliable ground-truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of the lake's evaporative demand. During the modelling period, Lake Turkana showed seasonal variations of 1-2 m, and the lake level fluctuated in a range of up to 4 m between 1998 and 2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins.
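A highly simplified monthly water-balance sketch in the spirit of such models is given below; it assumes a constant lake surface area and invented forcing values, whereas the published model uses routed runoff and altimetry-based calibration.

```python
import numpy as np

def simulate_lake_levels(level0_m, inflow_mcm, rain_mm, evap_mm, area_km2):
    """Monthly lake-level simulation: dL = inflow/area + rain - evaporation.
    Assumes a constant lake surface area (a simplification of the real model)."""
    levels = [level0_m]
    for q, p, e in zip(inflow_mcm, rain_mm, evap_mm):
        inflow_depth_m = q * 1e6 / (area_km2 * 1e6)   # million m3 -> metres over the lake
        dl = inflow_depth_m + (p - e) / 1000.0        # mm -> m
        levels.append(levels[-1] + dl)
    return np.array(levels)

# Hypothetical 12-month forcing for a Turkana-sized lake (~7500 km2).
inflow = np.array([300, 250, 400, 900, 1500, 1200, 800, 600, 500, 400, 350, 300])  # million m3
rain   = np.array([10, 10, 25, 60, 50, 20, 15, 15, 20, 25, 30, 15])                # mm
evap   = np.array([200] * 12)                                                       # mm
print(simulate_lake_levels(360.0, inflow, rain, evap, area_km2=7500).round(2))
```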
Small Scale Multisource Site – Hydrogeology Investigation
A site impacted by brackish water was evaluated using traditional hydrogeologic and geochemical site characterization techniques. No single, specific source of the brine impacted ground water was identified. However, the extent of the brine impacted ground water was found to be...
Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector
NASA Astrophysics Data System (ADS)
Isoli, Niccolo; Chaczykowski, Maciej
2017-11-01
The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector with regard to a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location and CO2 pipeline network design and sizing for deployment at a national scale, along with a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate the cost of a pipeline infrastructure able to transport 95 MT of CO2 annually, which amounts to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot be based on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
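A toy version of the graph-search step might look like the sketch below, which uses networkx shortest paths to route each source to the storage site over candidate corridors; node names and per-corridor costs are hypothetical, and the study's actual algorithm and cost model may differ.

```python
import networkx as nx

# Candidate corridors between CO2 sources, junctions, and the offshore storage site.
# Edge weights are illustrative CAPEX figures in M EUR (length x unit cost).
corridors = [
    ("plant_A", "hub_1", 120), ("plant_B", "hub_1", 90),
    ("plant_C", "hub_2", 150), ("hub_1", "hub_2", 60),
    ("hub_2", "storage_offshore", 200), ("hub_1", "storage_offshore", 260),
]
G = nx.Graph()
G.add_weighted_edges_from(corridors)

total_capex = 0.0
used_edges = set()
for source in ("plant_A", "plant_B", "plant_C"):
    path = nx.dijkstra_path(G, source, "storage_offshore")
    for u, v in zip(path, path[1:]):
        edge = frozenset((u, v))
        if edge not in used_edges:          # shared trunk lines are only paid for once
            used_edges.add(edge)
            total_capex += G[u][v]["weight"]
    print(source, "->", " -> ".join(path[1:]))
print("network CAPEX ~", total_capex, "M EUR")
```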
DOT National Transportation Integrated Search
2018-01-01
Connected vehicle mobility applications are commonly referred to as dynamic mobility applications (DMAs). DMAs seek to fully leverage frequently collected and rapidly disseminated multi-source data gathered from connected travelers, vehicles, and inf...
Advancing Future Network Science through Content Understanding
2014-05-01
…BitTorrent, PostgreSQL, MySQL, and GRSecurity) and emerging technologies (HadoopDFS, Tokutera, Sector/Sphere, HBase, and other BigTable-like…) … Multi-Source Network Pulse Analyzer and Correlator provides course of action planning by enhancing the understanding of the complex dynamics…
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks of existing model-based DAL methods for function learning by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm measure, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with the new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video, and objects.
Multisource, Phase-controlled Radiofrequency for Treatment of Skin Laxity
Moreno-Moraga, Javier; Muñoz, Estefania; Cornejo Navarro, Paloma
2011-01-01
Objective: The objective of this study was to analyze the correlation between degrees of clinical improvement and microscopic changes detected using confocal microscopy at the temperature gradients reached in patients treated for skin laxity with a phase-controlled, multisource radiofrequency system. Design and setting: Patients with skin laxity in the abdominal area were treated in six sessions with radiofrequency (the first 4 sessions were held at 2-week intervals and the 2 remaining sessions at 3-week intervals). Patients attended monitoring at 6, 9, and 12 months. Participants: 33 patients (all women). Measurements: The authors recorded the following: variations in weight, measurements of the contour of the treated area and control area, evaluation of clinical improvement by the clinician and by the patient, images taken using an infrared camera, temperature (before, immediately after, and 20 minutes after the procedure), and confocal microscopy images (before treatment and at 6, 9, and 12 months). The degree of clinical improvement was contrasted by two external observers (clinicians). The procedure was performed using a new phase-controlled, multipolar radiofrequency system. Results: The results reveal a greater degree of clinical improvement in patients with surface temperature increases greater than 11.5°C at the end of the procedure and remaining greater than 4.5°C 20 minutes later. These changes induced by radiofrequency were contrasted with the structural improvements observed at the dermal-epidermal junction using confocal microscopy. Changes are more intense and are statistically correlated with patients who show a greater degree of improvement and have higher temperature gradients at the end of the procedure and 20 minutes later. Conclusion: Monitoring and the use of parameters to evaluate end-point values in skin quality treatment by multisource, phase-controlled radiofrequency can help optimize aesthetic outcome. PMID:21278896
Using the 360 degrees multisource feedback model to evaluate teaching and professionalism.
Berk, Ronald A
2009-12-01
Student ratings have dominated as the primary and, frequently, only measure of teaching performance at colleges and universities for the past 50 years. Recently, there has been a trend toward augmenting those ratings with other data sources to broaden and deepen the evidence base. The 360 degrees multisource feedback (MSF) model used in management and industry for half a century and in clinical medicine for the last decade seemed like a best fit to evaluate teaching performance and professionalism. To adapt the 360 degrees MSF model to the assessment of teaching performance and professionalism of medical school faculty. The salient characteristics of the MSF models in industry and medicine were extracted from the literature. These characteristics along with 14 sources of evidence from eight possible raters, including students, self, peers, outside experts, mentors, alumni, employers, and administrators, based on the research in higher education were adapted to formative and summative decisions. Three 360 degrees MSF models were generated for three different decisions: (1) formative decisions and feedback about teaching improvement; (2) summative decisions and feedback for merit pay and contract renewal; and (3) formative decisions and feedback about professional behaviors in the academic setting. The characteristics of each model were listed. Finally, a top-10 list of the most persistent and, perhaps, intractable psychometric issues in executing these models was suggested to guide future research. The 360 degrees MSF model appears to be a useful framework for implementing a multisource evaluation of faculty teaching performance and professionalism in medical schools. This model can provide more accurate, reliable, fair, and equitable decisions than the one based on just a single source.
Specialty-specific multi-source feedback: assuring validity, informing training.
Davies, Helena; Archer, Julian; Bateman, Adrian; Dewar, Sandra; Crossley, Jim; Grant, Janet; Southgate, Lesley
2008-10-01
The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. Pearson correlation of MSF scores with OSPE performances was 0.48 (P = 0.001) and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = - 2.76, d.f. = 89, P < 0.01). Trainees scored least highly in relation to ability to use histopathology to solve clinical problems (mean = 4.39) and provision of good reports (mean = 4.39). Three of six doctors whose means were < 4.0 received free text comments about report writing. There were 83 forms with aggregate scores of < 4. Of these, 19.2% included comments about report writing. Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context as, in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
Runtime Simulation for Post-Disaster Data Fusion Visualization
2006-10-01
Center for Multisource Information Fusion (CMIF), The State University of New York at Buffalo, Buffalo, NY 14260 USA.
ERIC Educational Resources Information Center
Frederiksen, H. Allan
In the belief that "the spread of technological development and the attendant rapidly changing environment creates the necessity for multi-source feedback systems to maximize the alternatives available in dealing with global problems," the author shows how to participate in the process of alternate video. He offers detailed information…
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, a visible image collected by a single sensor can express the shape, color, and texture details of a target very well, but because of the haze its sharpness is low and parts of the target are lost. An infrared image collected by a single sensor, owing to thermal radiation and strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
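The final fusion step can be sketched generically as below, assuming the visible image has already been dehazed and the infrared image registered; the local-variance weighting is a common complementary-information heuristic, not necessarily the authors' exact rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    """Local variance as a simple measure of informative detail."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0, None)

def complementary_fusion(visible, infrared, size=7, eps=1e-6):
    """Pixel-wise weighted fusion: each source contributes where it carries
    more local detail (visible texture vs. infrared target radiance)."""
    w_vis = local_variance(visible, size)
    w_ir = local_variance(infrared, size)
    total = w_vis + w_ir + eps
    return (w_vis * visible + w_ir * infrared) / total

# Hypothetical registered, dehazed inputs in [0, 1].
rng = np.random.default_rng(2)
vis = rng.random((128, 128))
ir = rng.random((128, 128))
fused = complementary_fusion(vis, ir)
print(fused.shape, float(fused.min()), float(fused.max()))
```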
Sadick, Neil S; Sato, Masaki; Palmisano, Diana; Frank, Ido; Cohen, Hila; Harth, Yoram
2011-10-01
Acne scars are one of the most difficult disorders to treat in dermatology. The optimal treatment system will provide minimal downtime resurfacing for the epidermis and non-ablative deep volumetric heating for collagen remodeling in the dermis. A novel therapy system (EndyMed Ltd., Cesarea, Israel) uses phase-controlled multi-source radiofrequency (RF) to provide simultaneous one pulse microfractional resurfacing with simultaneous volumetric skin tightening. The study included 26 subjects (Fitzpatrick's skin type 2-5) with moderate to severe wrinkles and 4 subjects with depressed acne scars. Treatment was repeated each month up to a total of three treatment sessions. Patients' photographs were graded according to accepted scales by two uninvolved blinded evaluators. Significant reduction in the depth of wrinkles and acne scars was noted 4 weeks after therapy with further improvement at the 3-month follow-up. Our data show the histological impact and clinical beneficial effects of simultaneous RF fractional microablation and volumetric deep dermal heating for the treatment of wrinkles and acne scars.
Multisource feedback, human capital, and the financial performance of organizations.
Kim, Kyoung Yong; Atwater, Leanne; Patel, Pankaj C; Smither, James W
2016-11-01
We investigated the relationship between organizations' use of multisource feedback (MSF) programs and their financial performance. We proposed a moderated mediation framework in which employees' ability and knowledge sharing mediate the relationship between MSF and organizational performance, and the purpose for which MSF is used moderates the relationship of MSF with employees' ability and knowledge sharing. With a sample of 253 organizations representing 8,879 employees from 2005 to 2007 in South Korea, we found that MSF had a positive effect on organizational financial performance via employees' ability and knowledge sharing. We also found that when MSF was used for a dual purpose (both administrative and developmental), the relationship between MSF and knowledge sharing was stronger, and this interaction carried through to organizational financial performance. However, the purpose of MSF did not moderate the relationship between MSF and employees' ability. The theoretical relevance and practical implications of the findings are discussed.
Multi-source recruitment strategies for advancing addiction recovery research beyond treated samples
Subbaraman, Meenakshi Sabina; Laudet, Alexandre B.; Ritter, Lois A.; Stunz, Aina; Kaskutas, Lee Ann
2014-01-01
Background: The lack of established sampling frames makes reaching individuals in recovery from substance problems difficult. Although general population studies are most generalizable, the low prevalence of individuals in recovery makes this strategy costly and inefficient. Though more efficient, treatment samples are biased. Aims: To describe multi-source recruitment for capturing participants from heterogeneous pathways to recovery; assess which sources produced the most respondents within subgroups; and compare treatment and non-treatment samples to address generalizability. Results: Family/friends, Craigslist, social media, and non-12-step groups produced the most respondents from hard-to-reach groups, such as racial minorities and treatment-naïve individuals. Recovery organizations yielded twice as many African-Americans and more rural dwellers, while social media yielded twice as many young people as other sources. Treatment samples had proportionally fewer females and older individuals compared to non-treated samples. Conclusions: Future research on recovery should utilize previously neglected recruiting strategies to maximize the representativeness of samples. PMID:26166909
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; Whitney, Paul D.; White, Amanda M.
2013-07-15
Pacific Northwest National Laboratory has spent several years researching, developing, and validating large Bayesian network models to support the integration of open source data sets for nuclear proliferation research. Our current work focuses on generating a set of interrelated models for multi-source assessment of nuclear programs, as opposed to a single comprehensive model. By using this approach, we can break the models down into logical sub-problems that can utilize different expertise and data sources. This approach allows researchers to utilize the models individually or in combination to detect and characterize a nuclear program and identify data gaps. The models operate at various levels of granularity, covering a combination of state-level assessments with more detailed models of site or facility characteristics. This paper describes the current open-source-driven nuclear nonproliferation models under development, the pros and cons of the analytical approach, and areas for additional research.
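A toy two-indicator Bayesian network of the general kind described, using the pgmpy library for illustration; the node names, states, and probabilities are invented and do not reflect the laboratory's models.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy state-level assessment node with two open-source indicator nodes.
model = BayesianNetwork([("procurement_reports", "program_active"),
                         ("facility_imagery", "program_active")])
cpd_proc = TabularCPD("procurement_reports", 2, [[0.8], [0.2]])
cpd_img = TabularCPD("facility_imagery", 2, [[0.9], [0.1]])
cpd_prog = TabularCPD(
    "program_active", 2,
    # P(program | procurement, imagery) for the four parent combinations
    [[0.95, 0.6, 0.7, 0.1],
     [0.05, 0.4, 0.3, 0.9]],
    evidence=["procurement_reports", "facility_imagery"], evidence_card=[2, 2])
model.add_cpds(cpd_proc, cpd_img, cpd_prog)

inference = VariableElimination(model)
print(inference.query(["program_active"], evidence={"procurement_reports": 1}))
```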
NASA Astrophysics Data System (ADS)
Yin, Baoquan
2018-02-01
A new type of combined cooling, heating, and power photovoltaic/radiant-panel (PV/R) module is proposed and applied to a zero-energy building in this paper. The energy system of the building is composed of the PV/R module, a low-temperature-difference terminal, energy storage, a multi-source heat pump, and an energy balance control system. A radiant panel is attached to the backside of the PV module to cool it; this combination is the PV/R module. During the daytime the PV module is cooled by the radiant panel, and because of the temperature coefficient effect the power generation efficiency increases by 8% to 14%, while the solar heat collection efficiency of the radiant panel is about 45%. Through nocturnal radiative cooling, the PV/R cooling capacity can reach 50 W/m2. As a multifunctional energy device, the system shows its versatility in meeting the heating, cooling, and power demands of the building throughout the year.
The Cost of Ménière's Disease: A Novel Multisource Approach.
Tyrrell, Jessica; Whinney, David J; Taylor, Timothy
2016-01-01
To estimate the annual cost of Ménière's disease and the cost per person in the UK population and to investigate the direct and indirect costs of the condition. The authors utilized a multidata approach to provide the first estimate of the cost of Ménière's. Data from the UK Biobank (a study of 500,000 individuals collected between 2007 and 2012), the Hospital Episode Statistics (data on all hospital admissions in England from 2008 to 2012) and the UK Ménière's Society (2014) were used to estimate the cost of Ménière's. Cases were self-reported in the UK Biobank and UK Ménière's Society; within the Hospital Episode Statistics, cases were clinician diagnosed. The authors estimated the direct and indirect costs of the condition, using count data to represent numbers of individuals reporting specific treatments, operations, etc., and basic statistical analyses (χ² tests, linear and logistic regression) to compare cases and controls in the UK Biobank. Ménière's was estimated to cost between £541.30 million and £608.70 million annually (equivalent to US $829.9 to $934.2 million), equating to £3,341 to £3,757 ($5,112 to $5,748) per person per annum. The indirect costs were substantial, with loss of earnings contributing over £400 million per annum. For the first time, the authors were able to estimate the economic burden of Ménière's disease. In the UK, the annual cost of this condition is substantial. Further research is required to develop cost-effective treatments and management strategies for Ménière's to reduce the economic burden of the disease. These findings should be interpreted with caution due to the uncertainties inherent in the analysis.
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
USDA-ARS?s Scientific Manuscript database
In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...
Foundational Technologies for Activity-Based Intelligence - A Review of the Literature
2014-02-01
…academic community. The Center for Multisource Information Fusion (CMIF) at the University at Buffalo, Harvard University, and the University of… depth of researchers conducting high-value Multi-INT research; these efforts are delivering high-value research outcomes, e.g., [46-47].
Satisfaction Formation Processes in Library Users: Understanding Multisource Effects
ERIC Educational Resources Information Center
Shi, Xi; Holahan, Patricia J.; Jurkat, M. Peter
2004-01-01
This study explores whether disconfirmation theory can explain satisfaction formation processes in library users. Both library users' needs and expectations are investigated as disconfirmation standards. Overall library user satisfaction is predicted to be a function of two independent sources--satisfaction with the information product received…
Velpuri, Naga Manohar; Senay, Gabriel B.
2012-01-01
Lake Turkana, the largest desert lake in the world, is fed by ungauged or poorly gauged river systems. To meet the demand for electricity in the East African region, Ethiopia is currently building the Gibe III hydroelectric dam on the Omo River, which supplies more than 80% of the inflows to Lake Turkana. On completion, the Gibe III dam will be the tallest dam in Africa, with a height of 241 m. However, the nature of the interactions and potential impacts of regulated inflows on Lake Turkana are not well understood due to its remote location and the unavailability of reliable in-situ datasets. In this study, we used 12 years (1998–2009) of existing multi-source satellite and model-assimilated global weather data and a calibrated, multi-source, satellite-data-driven water balance model for Lake Turkana that takes into account modelled routed runoff, lake/reservoir evapotranspiration, direct rain on the lake/reservoir, and releases from the dam to compute lake water levels. The model evaluates the impact of the Gibe III dam using three different approaches to generate rainfall-runoff scenarios: a historical approach, a knowledge-based approach, and a nonparametric bootstrap resampling approach. All the approaches provided comparable and consistent results. Model results indicated that the hydrological impact of the dam on Lake Turkana would vary with the magnitude and distribution of rainfall after dam commencement. On average, the reservoir would take up to 8–10 months after commencement to reach a minimum operation level of 201 m depth of water. During the dam-filling period, the lake level would drop by up to 2 m (95% confidence) compared to the lake level modelled without the dam. The lake level variability caused by regulated inflows after dam commissioning was found to be within the natural variability of the lake of 4.8 m. Moreover, modelling results indicated that the hydrological impact of the Gibe III dam would depend on the initial lake level at the time of dam commencement. Areas along the Lake Turkana shoreline that are vulnerable to fluctuations in lake levels were also identified. This study demonstrates the effectiveness of using existing multi-source satellite data in a basic modelling framework to assess the potential hydrological impact of an upstream dam on a terminal downstream lake. The results obtained from this study could also be used to evaluate alternative dam-filling scenarios and assess the potential impact of the dam on Lake Turkana under different operational strategies.
Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell
2016-01-01
We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
Li, Alex Ning; Liao, Hui
2014-09-01
Integrating leader-member exchange (LMX) research with role engagement theory (Kahn, 1990) and role system theory (Katz & Kahn, 1978), we propose a multilevel, dual process model to understand the mechanisms through which LMX quality at the individual level and LMX differentiation at the team level simultaneously affect individual and team performance. With regard to LMX differentiation, we introduce a new configural approach focusing on the pattern of LMX differentiation to complement the traditional approach focusing on the degree of LMX differentiation. Results based on multiphase, multisource data from 375 employees of 82 teams revealed that, at the individual level, LMX quality positively contributed to customer-rated employee performance through enhancing employee role engagement. At the team level, LMX differentiation exerted negative influence on teams' financial performance through disrupting team coordination. In particular, teams with the bimodal form of LMX configuration (i.e., teams that split into 2 LMX-based subgroups with comparable size) suffered most in team performance because they experienced greatest difficulty in coordinating members' activities. Furthermore, LMX differentiation strengthened the relationship between LMX quality and role engagement, and team coordination strengthened the relationship between role engagement and employee performance. Theoretical and practical implications of the findings are discussed.
Multisource Data-Based Integrated Agricultural Drought Monitoring in the Huai River Basin, China
NASA Astrophysics Data System (ADS)
Sun, Peng; Zhang, Qiang; Wen, Qingzhi; Singh, Vijay P.; Shi, Peijun
2017-10-01
Drought monitoring is critical for early warning of drought hazards. This study attempted to develop an integrated remote sensing drought monitoring index (IRSDI) based on meteorological data for 2003-2013 from 40 meteorological stations, soil moisture data from 16 observatory stations, and Moderate Resolution Imaging Spectroradiometer data, together with a linear trend detection method and the standardized precipitation evapotranspiration index. The objective was to investigate drought conditions across the Huai River basin in both space and time. Results indicate that (1) the proposed IRSDI monitors and describes drought conditions across the Huai River basin reasonably well in both space and time; (2) droughts and severe droughts occur most frequently during April-May and July-September, and the northeastern and eastern parts of the Huai River basin are dominated by frequent and intensified drought events; these regions are characterized by dry croplands, grasslands, and highly dense populations and are hence more sensitive to drought hazards; (3) intensified droughts are detected during almost all months except January, August, October, and December, with significant intensification discerned mainly in the eastern and western Huai River basin. The duration of, and regions dominated by, intensified drought events will be a challenge for water resources management in view of agricultural and other activities in these regions under a changing climate.
Crop 3D-a LiDAR based platform for 3D high-throughput crop phenotyping.
Guo, Qinghua; Wu, Fangfang; Pang, Shuxin; Zhao, Xiaoqian; Chen, Linhai; Liu, Jin; Xue, Baolin; Xu, Guangcai; Li, Le; Jing, Haichun; Chu, Chengcai
2018-03-01
With the growing population and the reducing arable land, breeding has been considered as an effective way to solve the food crisis. As an important part in breeding, high-throughput phenotyping can accelerate the breeding process effectively. Light detection and ranging (LiDAR) is an active remote sensing technology that is capable of acquiring three-dimensional (3D) data accurately, and has a great potential in crop phenotyping. Given that crop phenotyping based on LiDAR technology is not common in China, we developed a high-throughput crop phenotyping platform, named Crop 3D, which integrated LiDAR sensor, high-resolution camera, thermal camera and hyperspectral imager. Compared with traditional crop phenotyping techniques, Crop 3D can acquire multi-source phenotypic data in the whole crop growing period and extract plant height, plant width, leaf length, leaf width, leaf area, leaf inclination angle and other parameters for plant biology and genomics analysis. In this paper, we described the designs, functions and testing results of the Crop 3D platform, and briefly discussed the potential applications and future development of the platform in phenotyping. We concluded that platforms integrating LiDAR and traditional remote sensing techniques might be the future trend of crop high-throughput phenotyping.
On Meaningful Measurement: Concepts, Technology and Examples.
ERIC Educational Resources Information Center
Cheung, K. C.
This paper discusses how concepts and procedural skills in problem-solving tasks, as well as affects and emotions, can be subjected to meaningful measurement (MM), based on a multisource model of learning and a constructivist information-processing theory of knowing. MM refers to the quantitative measurement of conceptual and procedural knowledge…
Efficient Multi-Source Data Fusion for Decentralized Sensor Networks
2006-10-01
…Operating Picture (COP). Robovolc, accessing a single DDF node associated with a CCTV camera (marked in orange in Figure 3a), defends a 'sensitive…' …Gaussian environments. [Figure 10: Particle Distribution Snapshots; position error between each target and the particle set in the bearing-only case.]
Data Mining Algorithms for Classification of Complex Biomedical Data
ERIC Educational Resources Information Center
Lan, Liang
2012-01-01
In my dissertation, I will present my research which contributes to solve the following three open problems from biomedical informatics: (1) Multi-task approaches for microarray classification; (2) Multi-label classification of gene and protein prediction from multi-source biological data; (3) Spatial scan for movement data. In microarray…
Directed Vapor Deposition: Low Vacuum Materials Processing Technology
2000-01-01
[Figure: schematic of the directed vapor deposition source — crucibles containing constituents A and B, electron beams, a "skull" melt in a water-cooled copper crucible, the evaporation target/evaporant material, the vapor fluxes of A and B, the substrate, deposit composition, and fibrous coating surface; panels a) and b).] …sharp (0.5 mm) beam focussing. When used with multisource
Evaluation of Professional Role Competency during Psychiatry Residency
ERIC Educational Resources Information Center
Grujich, Nikola N.; Razmy, Ajmal; Zaretsky, Ari; Styra, Rima G.; Sockalingam, Sanjeev
2012-01-01
Objective: The authors sought to determine psychiatry residents' perceptions on the current method of evaluating professional role competency and the use of multi-source feedback (MSF) as an assessment tool. Method: Authors disseminated a structured, anonymous survey to 128 University of Toronto psychiatry residents, evaluating the current mode of…
Cross-Modulation Interference with Lateralization of Mixed-Modulated Waveforms
ERIC Educational Resources Information Center
Hsieh, I-Hui; Petrosyan, Agavni; Goncalves, Oscar F.; Hickok, Gregory; Saberi, Kourosh
2010-01-01
Purpose: This study investigated the ability to use spatial information in mixed-modulated (MM) sounds containing concurrent frequency-modulated (FM) and amplitude-modulated (AM) sounds by exploring patterns of interference when different modulation types originated from different loci as may occur in a multisource acoustic field. Method:…
The Effect of Surgeon Empathy and Emotional Intelligence on Patient Satisfaction
ERIC Educational Resources Information Center
Weng, Hui-Ching; Steed, James F.; Yu, Shang-Won; Liu, Yi-Ten; Hsu, Chia-Chang; Yu, Tsan-Jung; Chen, Wency
2011-01-01
We investigated the associations of surgeons' emotional intelligence and surgeons' empathy with patient-surgeon relationships, patient perceptions of their health, and patient satisfaction before and after surgical procedures. We used multi-source approaches to survey 50 surgeons and their 549 outpatients during initial and follow-up visits.…
Single Mothers of Early Adolescents: Perceptions of Competence
ERIC Educational Resources Information Center
Beckert, Troy E.; Strom, Paris S.; Strom, Robert D.; Darre, Kathryn; Weed, Ane
2008-01-01
The purpose of this study was to examine similarities and differences in single mothers' and adolescents' perceptions of parenting competencies from a developmental assets approach. A multi-source (mothers [n = 29] and 10-14-year-old adolescent children [n = 29]), single-method (both generations completed the Parent Success Indicator)…
Fusion or confusion: knowledge or nonsense?
NASA Astrophysics Data System (ADS)
Rothman, Peter L.; Denton, Richard V.
1991-08-01
The terms 'data fusion,' 'sensor fusion,' 'multi-sensor integration,' and 'multi-source integration' have been used widely in the technical literature to refer to a variety of techniques, technologies, systems, and applications which employ and/or combine data derived from multiple information sources. Applications of data fusion range from real-time fusion of sensor information for the navigation of mobile robots to the off-line fusion of both human and technical strategic intelligence data. The Department of Defense Critical Technologies Plan lists data fusion in the highest priority group of critical technologies, but just what is data fusion? The DoD Critical Technologies Plan states that data fusion involves 'the acquisition, integration, filtering, correlation, and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving system performance and utility.' More simply stated, sensor fusion refers to the combination of data from multiple sources to provide enhanced information quality and availability over that which is available from any individual source alone. This paper presents a survey of the state-of-the-art in data fusion technologies, system components, and applications. A set of characteristics which can be utilized to classify data fusion systems is presented. Additionally, a unifying mathematical and conceptual framework within which to understand and organize fusion technologies is described. A discussion of often-overlooked issues in the development of sensor fusion systems is also presented.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source, multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standardized data storage. The overall purpose of this work is to set up a common data platform that provides information sharing services. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format for designing data file structures in a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and a comprehensive data dictionary manages system operation and security. (3) An extension to the system's data management functions based on the data dictionary. The constrained data-item input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in southeast China with satisfactory results.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
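A minimal sketch of Haar-DWT energy features for a one-dimensional sensor return, using pywt; the decomposition depth, normalization, and synthetic signal are illustrative assumptions rather than the configuration evaluated in the paper.

```python
import numpy as np
import pywt

def haar_energy_features(signal, level=4):
    """Multi-level Haar DWT; one normalized energy feature per coefficient band."""
    coeffs = pywt.wavedec(signal, "haar", level=level)
    energies = np.array([np.sum(c * c) for c in coeffs])
    return energies / energies.sum()

# Hypothetical acoustic/seismic return from a passing vehicle.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t) + 0.2 * rng.normal(size=t.size)
print(haar_energy_features(signal).round(3))
```

Such band-energy vectors would then feed a classifier such as the Bayesian minimum-error-rate routine mentioned above.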
NASA Astrophysics Data System (ADS)
Taubenböck, H.; Wurm, M.; Netzband, M.; Zwenzner, H.; Roth, A.; Rahman, A.; Dech, S.
2011-02-01
Estimating flood risks and managing disasters combines knowledge in climatology, meteorology, hydrology, hydraulic engineering, statistics, planning and geography - thus a complex multi-faceted problem. This study focuses on the capabilities of multi-source remote sensing data to support decision-making before, during and after a flood event. With our focus on urbanized areas, sample methods and applications show multi-scale products from the hazard and vulnerability perspective of the risk framework. From the hazard side, we present capabilities with which to assess flood-prone areas before an expected disaster. Then we map the spatial impact during or after a flood and finally, we analyze damage grades after a flood disaster. From the vulnerability side, we monitor urbanization over time on an urban footprint level, classify urban structures on an individual building level, assess building stability and quantify probably affected people. The results show a large database for sustainable development and for developing mitigation strategies, ad-hoc coordination of relief measures and organizing rehabilitation.
Hydropower assessment of Bolivia—A multisource satellite data and hydrologic modeling approach
Velpuri, Naga Manohar; Pervez, Shahriar; Cushing, W. Matthew
2016-11-28
This study produced a geospatial database for use in a decision support system by the Bolivian authorities to investigate further development and investment potentials in sustainable hydropower in Bolivia. The study assessed theoretical hydropower of all 1-kilometer (km) stream segments in the country using multisource satellite data and a hydrologic modeling approach. With the assessment covering the 2 million square kilometer (km2) region influencing Bolivia’s drainage network, the potential hydropower figures are based on theoretical yield assuming that the systems generating the power are 100 percent efficient. There are several factors to consider when determining the real-world or technical power potential of a hydropower system, and these factors can vary depending on local conditions. Since this assessment covers a large area, it was necessary to reduce these variables to the two that can be modeled consistently throughout the region, streamflow or discharge, and elevation drop or head. First, the Shuttle Radar Topography Mission high-resolution 30-meter (m) digital elevation model was used to identify stream segments with greater than 10 km2 of upstream drainage. We applied several preconditioning processes to the 30-m digital elevation model to reduce errors and improve the accuracy of stream delineation and head height estimation. A total of 316,500 1-km stream segments were identified and used in this study to assess the total theoretical hydropower potential of Bolivia. Precipitation observations from a total of 463 stations obtained from the Bolivian Servicio Nacional de Meteorología e Hidrología (Bolivian National Meteorology and Hydrology Service) and the Brazilian Agência Nacional de Águas (Brazilian National Water Agency) were used to validate six different gridded precipitation estimates for Bolivia obtained from various sources. Validation results indicated that gridded precipitation estimates from the Tropical Rainfall Measuring Mission (TRMM) reanalysis product (3B43) had the highest accuracies. The coarse-resolution (25-km) TRMM data were disaggregated to 5-km pixels using climatology information obtained from the Climate Hazards Group Infrared Precipitation with Stations dataset. About a 17-percent bias was observed in the disaggregated TRMM estimates, which was corrected using the station observations. The bias-corrected, disaggregated TRMM precipitation estimate was used to compute stream discharge using a regionalization approach. In regionalization approach, required homogeneous regions for Bolivia were derived from precipitation patterns and topographic characteristics using a k-means clustering approach. Using the discharge and head height estimates for each 1-km stream segment, we computed hydropower potential for 316,490 stream segments within Bolivia and that share borders with Bolivia. The total theoretical hydropower potential (TTHP) of these stream segments was found to be 212 gigawatts (GW). Out of this total, 77.4 GW was within protected areas where hydropower projects cannot be developed; hence, the remaining total theoretical hydropower in Bolivia (outside the protected areas) was estimated as 135 GW. Nearly 1,000 1-km stream segments, however, were within the boundaries of existing hydropower projects. The TTHP of these stream segments was nearly 1.4 GW, so the residual TTHP of the streams in Bolivia was estimated as 133 GW. 
Care should be exercised in understanding and interpreting the TTHP identified in this study because not all of the stream segments identified and assessed can be harnessed to their full capacity; furthermore, factors such as required environmental flows, efficiency, economics, and feasibility need to be considered to better identify the real-world hydropower potential. If environmental flow requirements of 20–40 percent are considered, the total theoretical power available is reduced by 60–80 percent. In addition, a 0.72 efficiency factor further reduces the estimate by another 28 percent. This study provides the baseline theoretical hydropower potential for Bolivia; the next step is to identify optimal hydropower plant locations and apply these principles to appraise a real-world power potential in Bolivia.
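The theoretical potential described above reduces, per segment, to the familiar hydropower relation P = ρgQH. A minimal sketch of that calculation, with illustrative discharge and head values rather than the study's data, might look like the following (the 0.72 efficiency factor mentioned above is shown as an optional argument):

```python
# Minimal sketch of theoretical hydropower for a 1-km stream segment.
# P = rho * g * Q * H, assuming 100 percent efficiency as in the assessment;
# the discharge and head values below are illustrative, not from the study.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def theoretical_power_watts(discharge_m3s: float, head_m: float,
                            efficiency: float = 1.0) -> float:
    """Theoretical hydropower of a stream segment (W)."""
    return efficiency * RHO_WATER * G * discharge_m3s * head_m

segment_power = theoretical_power_watts(discharge_m3s=35.0, head_m=12.0)
print(f"Theoretical potential: {segment_power / 1e6:.1f} MW")
# Applying a 0.72 efficiency factor, as discussed above, lowers the estimate:
print(f"With 0.72 efficiency:  {theoretical_power_watts(35.0, 12.0, 0.72) / 1e6:.1f} MW")
```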
Historical and future changes of frozen ground in the upper Yellow River Basin
NASA Astrophysics Data System (ADS)
Wang, Taihua; Yang, Dawen; Qin, Yue; Wang, Yuhan; Chen, Yun; Gao, Bing; Yang, Hanbo
2018-03-01
Frozen ground degradation resulting from climate warming on the Tibetan Plateau has aroused wide concern in recent years. In this study, the maximum thickness of seasonally frozen ground (MTSFG) is estimated by the Stefan equation, which is validated using long-term frozen depth observations. The permafrost distribution is estimated by the temperature at the top of permafrost (TTOP) model, which is validated using borehole observations. The two models are applied to the upper Yellow River Basin (UYRB) for analyzing the spatio-temporal changes in frozen ground. The simulated results show that the areal mean MTSFG in the UYRB decreased by 3.47 cm/10 a during 1965-2014, and that approximately 23% of the permafrost in the UYRB degraded to seasonally frozen ground during the past 50 years. Using the climate data simulated by 5 General Circulation Models (GCMs) under the Representative Concentration Pathway (RCP) 4.5, the areal mean MTSFG is projected to decrease by 1.69 to 3.07 cm/10 a during 2015-2050, and approximately 40% of the permafrost in 1991-2010 is projected to degrade into seasonally frozen ground in 2031-2050. This study provides a framework to estimate the long-term changes in frozen ground based on a combination of multi-source observations at the basin scale, and this framework can be applied to other areas of the Tibetan Plateau. The estimates of frozen ground changes could provide a scientific basis for water resource management and ecological protection under the projected future climate changes in headwater regions on the Tibetan Plateau.
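The Stefan equation used above relates frost depth to the freezing degree-day index and soil thermal properties. A minimal sketch of the classical form, with illustrative soil parameters rather than the study's calibrated values, is:

```python
import math

def stefan_frost_depth(freezing_index_degC_days: float,
                       k_frozen: float = 1.5,        # W/(m K), illustrative frozen soil conductivity
                       dry_density: float = 1400.0,  # kg/m^3, illustrative dry bulk density
                       water_content: float = 0.20,  # gravimetric water content (kg/kg)
                       latent_heat: float = 3.34e5   # J/kg, latent heat of fusion of water
                       ) -> float:
    """Maximum seasonal frost depth (m) from the classical Stefan solution."""
    degree_seconds = freezing_index_degC_days * 86400.0   # degree-days -> degree-seconds
    return math.sqrt(2.0 * k_frozen * degree_seconds
                     / (latent_heat * dry_density * water_content))

print(f"MTSFG ~ {stefan_frost_depth(1200.0):.2f} m for a 1200 degC-day freezing index")
```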
Butler, Ainslie J; Thomas, M Kate; Pintar, Katarina D M
2015-04-01
Enteric illness represents a significant burden of illness in Canada and globally. Understanding its sources is a critical step in identifying and preventing health risks. Expert elicitation is a powerful tool, used previously to obtain information about enteric illness source attribution where information is difficult or expensive to obtain. Thirty-one experts estimated transmission of 28 pathogens via major transmission routes (foodborne, waterborne, animal contact, person-to-person, and other) at the point of consumption. The elicitation consisted of a (snowball) recruitment phase, administration of a pre-survey to collect background information, an introductory webinar, an elicitation survey, a 1-day discussion, survey readministration, and a feedback exercise; surveys were administered online. Experts were prompted to quantify changes in contamination at the point of entry into the kitchen versus the point of consumption. Estimates were combined via triangular probability distributions, and medians and 90% credible-interval estimates were produced. Transmission was attributed primarily to food for Bacillus cereus, Clostridium perfringens, Cyclospora cayetanensis, Trichinella spp., all three Vibrio spp. categories explored, and Yersinia enterocolitica. Multisource pathogens (e.g., those transmitted commonly through both water and food) such as Campylobacter spp., four Escherichia coli categories, Listeria monocytogenes, Salmonella spp., and Staphylococcus aureus were also estimated as mostly foodborne. Water was the primary pathway for Giardia spp. and Cryptosporidium spp., and person-to-person transmission dominated for six enteric viruses and Shigella spp. Consideration of the point of attribution highlighted the importance of food handling and cross-contamination in the transmission pathway. This study provides source attribution estimates of enteric illness for Canada, considering all possible transmission routes. Further research is necessary to improve our understanding of poorly characterized pathogens such as sapovirus and E. coli subgroups in Canada.
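The combination step described above, pooling expert estimates expressed as triangular distributions and reporting medians with 90% credible intervals, can be sketched with a simple Monte Carlo draw. The expert inputs below are invented, and equal weighting of experts is assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical expert estimates of the foodborne fraction (%) for one pathogen:
# each expert supplies (minimum, most likely, maximum), as in a triangular elicitation.
experts = [(40, 60, 80), (50, 65, 90), (30, 55, 75)]

n_draws = 100_000
draws = np.concatenate([
    rng.triangular(lo, mode, hi, size=n_draws // len(experts))
    for lo, mode, hi in experts
])

median = np.median(draws)
lo90, hi90 = np.percentile(draws, [5, 95])
print(f"Foodborne attribution: median {median:.1f}%, 90% CrI [{lo90:.1f}%, {hi90:.1f}%]")
```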
A multi-source precipitation approach to fill gaps over a radar precipitation field
NASA Astrophysics Data System (ADS)
Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.
2012-12-01
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological, and gauge precipitation products to fill gaps over ground-based radar precipitation fields. To mix different precipitation products, the biases of the products relative to one another must be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar with rain gauges, climatological rainfall sources, and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), the radar-based Stage-II product, the climatological product PRISM, and a rain gauge dataset for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM, and satellite products, a radar-like product can be achieved over radar gap areas, benefiting the scientific community.
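A highly simplified sketch of the two core ingredients, multiplicative bias correction of the satellite field against radar and a fallback merge over the radar gap, is shown below. The fields are synthetic, and the plain substitution used here stands in for the paper's ensemble bias fields and weighted SCM:

```python
import numpy as np

rng = np.random.default_rng(0)

radar = rng.gamma(2.0, 2.0, size=(50, 50))                  # radar rainfall field (mm)
satellite = 1.3 * radar + rng.normal(0, 0.5, radar.shape)   # biased satellite estimate
radar_gap = np.zeros(radar.shape, dtype=bool)
radar_gap[20:30, 20:30] = True                              # block of missing radar pixels

# Multiplicative bias factor estimated from a random sample of rainy, gap-free pixels.
valid = (~radar_gap) & (radar > 0.5)
sample = rng.choice(np.flatnonzero(valid), size=200, replace=False)
bias = satellite.ravel()[sample].sum() / radar.ravel()[sample].sum()
satellite_corrected = satellite / bias

# Trust radar where available; fill the gap with the bias-corrected satellite field
# (a simple stand-in for the weighted SCM merge described in the abstract).
merged = np.where(radar_gap, satellite_corrected, radar)
print(f"Estimated multiplicative bias: {bias:.2f}")
print(f"Mean rainfall over the filled gap: {merged[radar_gap].mean():.2f} mm")
```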
Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo
2010-02-01
The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan in 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistemic principles framework and generates spatiotemporal estimates of oral cancer incidence rates. It accounts for the multi-sourced uncertainty of rates, including small population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis alleviates the noise in oral cancer data arising from the population size effect. Compared to the raw incidence data, the maps of GBME-estimated results can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the area. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. 2010 Elsevier Inc. All rights reserved.
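GBME itself combines maximum entropy priors with space-time covariance modeling and is beyond a short snippet. As a much simpler illustration of why raw rates from small populations are unstable and how Bayesian shrinkage stabilizes them, the following Poisson-gamma empirical Bayes sketch (not the authors' GBME method) uses invented counts:

```python
import numpy as np

# Hypothetical case counts and populations for small townships (invented numbers).
cases = np.array([0, 2, 30, 1, 120, 5])
population = np.array([800, 1200, 5000, 600, 9000, 2500])
raw_rate = cases / population                      # unstable for small populations

# Method-of-moments gamma prior on the underlying rate, fitted across areas.
mean_rate = cases.sum() / population.sum()
var_rate = max(raw_rate.var() - (mean_rate / population).mean(), 1e-12)
alpha = mean_rate**2 / var_rate
beta = mean_rate / var_rate

# The posterior mean shrinks each raw rate toward the regional mean,
# more strongly where the population (hence the evidence) is small.
smoothed = (cases + alpha) / (population + beta)
for r_raw, r_eb in zip(raw_rate, smoothed):
    print(f"raw {r_raw:.2e}  ->  smoothed {r_eb:.2e}")
```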
Scaling isotopic emissions and microbes across a permafrost thaw landscape
NASA Astrophysics Data System (ADS)
Varner, R. K.; Palace, M. W.; Saleska, S. R.; Bolduc, B.; Braswell, B. H., Jr.; Crill, P. M.; Chanton, J.; DelGreco, J.; Deng, J.; Frolking, S. E.; Herrick, C.; Hines, M. E.; Li, C.; McArthur, K. J.; McCalley, C. K.; Persson, A.; Roulet, N. T.; Torbick, N.; Tyson, G. W.; Rich, V. I.
2017-12-01
High latitude peatlands are a significant source of atmospheric methane. This source is spatially and temporally heterogeneous, resulting in a wide range of emission estimates for the atmospheric budget. Increasing atmospheric temperatures are causing degradation of underlying permafrost, creating changes in surface soil moisture, surface and sub-surface hydrological patterns, vegetation, and microbial communities, but the consequences for rates and magnitudes of methane production and emissions are poorly accounted for in global budgets. We combined field observations, multi-source remote sensing data, and biogeochemical modeling to predict methane dynamics, including the fraction derived from hydrogenotrophic versus acetoclastic microbial methanogenesis, across Stordalen mire, a heterogeneous discontinuous permafrost wetland located in northernmost Sweden. Using the field-measurement-validated Wetland-DNDC biogeochemical model, we estimated mire-wide CH4 and del13CH4 production and emissions for 2014 with input from field- and unmanned aerial system (UAS) image-derived vegetation maps, local climatology, and water table from in situ and remotely sensed data. Model-simulated methanogenic pathways correlate with sequence-based observations of methanogen community composition in samples collected from across the permafrost thaw landscape. This approach enables us to link below-ground microbial community composition with emissions and indicates a potential for scaling across broad areas of the Arctic region.
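The partitioning of methane production into hydrogenotrophic versus acetoclastic pathways can be illustrated with a simple two-endmember carbon-isotope mixing calculation. The endmember values below are broadly typical but illustrative, not the study's site-specific values:

```python
def hydrogenotrophic_fraction(delta13C_sample: float,
                              delta13C_hydrogenotrophic: float = -80.0,  # permil, illustrative
                              delta13C_acetoclastic: float = -55.0       # permil, illustrative
                              ) -> float:
    """Fraction of CH4 from the hydrogenotrophic pathway by linear two-endmember mixing."""
    f = ((delta13C_sample - delta13C_acetoclastic)
         / (delta13C_hydrogenotrophic - delta13C_acetoclastic))
    return min(max(f, 0.0), 1.0)

for d13c in (-75.0, -65.0, -58.0):
    print(f"d13C-CH4 = {d13c:+.1f} permil -> hydrogenotrophic fraction "
          f"{hydrogenotrophic_fraction(d13c):.2f}")
```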
NASA Astrophysics Data System (ADS)
Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.
2018-03-01
Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.
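A minimal sketch of the general idea, training a small neural network on several gridded SWE products plus simple covariates to predict station SWE, is given below. The arrays are synthetic placeholders, and the predictors, architecture, and cross-validation scheme of the actual study are more elaborate:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500

# Synthetic stand-ins: three gridded SWE products plus elevation and day-of-year.
truth = rng.gamma(3.0, 100.0, n)                       # "observed" station SWE (mm)
X = np.column_stack([
    truth * 0.8 + rng.normal(0, 60, n),                # product 1
    truth * 1.1 + rng.normal(0, 80, n),                # product 2
    truth * 0.9 + rng.normal(0, 50, n),                # product 3
    rng.uniform(300, 2500, n),                         # elevation (m)
    rng.integers(60, 120, n),                          # day of year
])

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mae = -cross_val_score(ann, X, truth, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"Cross-validated MAE: {mae:.1f} mm SWE")
```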
Global Ocean Evaporation Increases Since 1960 in Climate Reanalyses: How Accurate Are They?
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Roberts, Jason B.; Bosilovich, Michael G.
2016-01-01
AGCMs w/ Specified SSTs (AMIPs): GEOS-5 and ERA-20CM ensembles incorporate the best historical estimates of SST, sea ice, and radiative forcing; atmospheric "weather noise" is inconsistent with the specified SST, and instantaneous surface fluxes can have the wrong sign (e.g., Indian Ocean Monsoon, high-latitude oceans); averaging over ensemble members helps isolate the SST-forced signal. Reduced Observational Reanalyses: NOAA 20CR V2C, ERA-20C, and JRA-55C incorporate observed surface pressure (20CR), marine winds (ERA-20C), and rawinsondes (JRA-55C) to recover much of the true synoptic weather without the shock of new satellite observations. Comprehensive Reanalyses (MERRA-2): full suite of observational constraints, both conventional and remote sensing, but with substantial uncertainties owing to the evolving satellite observing system. Multi-source Statistically Blended (OAFlux, Large-Yeager): blend reanalysis, satellite, and ocean buoy information; while climatological biases are removed, non-physical trends or variations in components remain. Satellite Retrievals (GSSTF3, SeaFlux, HOAPS3, ...): global coverage; retrieved near-surface wind speed and humidity are used with SST to drive accurate bulk aerodynamic flux estimates; satellite inter-calibration and spacecraft pointing variations are crucial; short record (late 1987-present). In situ Measurements (ICOADS, IVAD, research cruises): VOS and buoys offer direct measurements, but data coverage is sparse (especially south of 30S) and measurement techniques have changed (e.g., shipboard anemometer height).
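The satellite retrievals listed above feed bulk aerodynamic flux estimates. A minimal sketch of the standard bulk formula for the latent heat flux, using an assumed constant exchange coefficient rather than a full COARE-style algorithm, is:

```python
RHO_AIR = 1.2        # kg/m^3
LV = 2.5e6           # J/kg, latent heat of vaporization
CE = 1.2e-3          # dimensionless bulk exchange coefficient (assumed constant)

def latent_heat_flux(wind_speed: float, q_sea: float, q_air: float) -> float:
    """Bulk latent heat flux (W/m^2) from near-surface wind and specific humidities (kg/kg)."""
    return RHO_AIR * LV * CE * wind_speed * (q_sea - q_air)

# Illustrative values: 8 m/s wind, saturation humidity at the SST vs observed air humidity.
print(f"LHF ~ {latent_heat_flux(8.0, 0.020, 0.015):.0f} W/m^2")
```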
ERIC Educational Resources Information Center
Blackman, Gabrielle L.; Ostrander, Rick; Herman, Keith C.
2005-01-01
Although ADHD and depression are common comorbidities in youth, few studies have examined this particular clinical presentation. To address method bias limitations of previous research, this study uses multiple informants to compare the academic, social, and clinical functioning of children with ADHD, children with ADHD and depression, and…
ERIC Educational Resources Information Center
Powers, Joshua B.
This study investigated institutional resource factors that may explain differential performance with university technology transfer--the process by which university research is transformed into marketable products. Using multi-source data on 108 research universities, a set of internal resources (financial, physical, human capital, and…
ERIC Educational Resources Information Center
Sargeant, Joan; MacLeod, Tanya; Sinclair, Douglas; Power, Mary
2011-01-01
Introduction: The Colleges of Physicians and Surgeons of Alberta and Nova Scotia (CPSNS) use a standardized multisource feedback program, the Physician Achievement Review (PAR/NSPAR), to provide physicians with performance assessment data via questionnaires from medical colleagues, coworkers, and patients on 5 practice domains: consultation…
ERIC Educational Resources Information Center
Lans, Thomas; Biemans, Harm; Mulder, Martin; Verstegen, Jos
2010-01-01
An important assumption of entrepreneurial competence is that (at least part of) it can be learned and developed. However, human resources development (HRD) practices aimed at further strengthening and developing small-business owner-managers' entrepreneurial competence are complex and underdeveloped. A multisource assessment of owner-managers'…
WHO Expert Committee on Specifications for Pharmaceutical Preparations. Forty-ninth report.
2015-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use. Revised procedure for the development of monographs and other texts for The International Pharmacopoeia; Revised updating mechanism for the section on radiopharmaceuticals in The International Pharmacopoeia; Revision of the supplementary guidelines on good manufacturing practices: validation, Appendix 7: non-sterile process validation; General guidance for inspectors on hold-time studies; 16 technical supplements to Model guidance for the storage and transport of time- and temperature-sensitive pharmaceutical products; Recommendations for quality requirements when plant-derived artemisinin is used as a starting material in the production of antimalarial active pharmaceutical ingredients; Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability: revision; Guidance on the selection of comparator pharmaceutical products for equivalence assessment of interchangeable multisource (generic) products: revision; and Good review practices: guidelines for national and regional regulatory authorities.
NASA Astrophysics Data System (ADS)
Luo, Qiu; Xin, Wu; Qiming, Xiong
2017-06-01
In vegetation remote sensing information extraction, phenological features are often not considered and remote sensing analysis algorithms perform poorly. To address this problem, a method for extracting vegetation information from remote sensing data based on EVI time series and a decision-tree classification with multi-source branch similarity is proposed. Firstly, to improve the stability of recognition accuracy over the time series, the seasonal features of vegetation are extracted based on the fitted span of the time series. Secondly, decision-tree similarity is assessed through the adaptive selection of paths or the probability parameters of component predictions; this index is used to evaluate the degree of task association, to decide whether to migrate the multi-source decision tree, and to ensure the speed of migration. Finally, the classification and recognition accuracy for pests and diseases in Dalbergia hainanensis commercial forest reaches 87%-98%, significantly better than the 80%-96% accuracy of MODIS coverage in this area, verifying the validity of the proposed method.
A novel virtual hub approach for multisource downstream service integration
NASA Astrophysics Data System (ADS)
Previtali, Mattia; Cuca, Branka; Barazzetti, Luigi
2016-08-01
A large development of downstream services is expected to be stimulated by earth observation (EO) datasets acquired by the Copernicus satellites. An important challenge connected with the availability of downstream services is the possibility of integrating them in order to create innovative applications with added value for users of different categories. At the moment, the world of geo-information (GI) is extremely heterogeneous in terms of the standards and formats used, thus preventing facilitated access to and integration of downstream services. Indeed, different users and data providers also have different requirements in terms of communication protocols and technological advancement. In recent years, many important programs and initiatives have tried to address this issue even at trans-regional and international levels (e.g. the INSPIRE Directive, GEOSS, Eye on Earth and SEIS). However, a lack of interoperability between systems and services still exists. In order to facilitate the interaction between different downstream services, a new architectural approach (developed within the European project ENERGIC OD) is proposed in this paper. The brokering-oriented architecture introduces a new mediation layer (the Virtual Hub) which works as an intermediary to bridge the gaps linked to interoperability issues. This intermediation layer de-couples the server and the client, allowing facilitated access to multiple downstream services and also to Open Data provided by national and local SDIs. In particular, this paper presents an application integrating four sets of services on the topic of agriculture: (i) the service given by Space4Agri (providing services based on MODIS and Landsat data); (ii) Gicarus Lab (providing sample services based on Landsat datasets); (iii) FRESHMON (providing sample services for water quality); and (iv) services from several regional SDIs.
NASA Astrophysics Data System (ADS)
Li, D.
2016-12-01
Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, river flow conditions are complicated, with frequent artificial interference. Sudden water pollution accidents occur mainly in the form of large abnormal discharges of wastewater and are characterized by sudden occurrence, uncontrollable scope, uncertain affected objects, and the concentrated distribution of many risk sources. Effective prevention of pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can be applied to represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, the pollution source state-switching model, the water quality model for the river network, and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which can describe the relationship between the excessive risk of the system and the multiple risk sources. Finally, the diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model, which can identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied and yields reasonable results for three typical years. The study shows that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions, and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through the identification of critical pollution sources.
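The diagnostic reasoning step, apportioning an observed water-quality exceedance among candidate pollution sources, can be illustrated with a toy Bayes'-rule calculation over mutually exclusive source hypotheses. This is not the authors' coupled river-network model, and all probabilities are invented:

```python
# Toy diagnostic reasoning: which pollution source most likely caused an observed
# water-quality exceedance? Sources are treated as mutually exclusive hypotheses;
# all priors and likelihoods are invented for illustration only.
sources = {
    # name: (prior prob. of abnormal discharge, P(exceedance | that discharge))
    "chemical plant":      (0.05, 0.80),
    "wastewater outfall":  (0.10, 0.50),
    "agricultural runoff": (0.20, 0.20),
}
p_no_source = 1.0 - sum(prior for prior, _ in sources.values())
p_exceed_background = 0.02   # exceedance probability with no abnormal discharge

# Total probability of observing an exceedance (law of total probability).
evidence = (sum(prior * like for prior, like in sources.values())
            + p_no_source * p_exceed_background)

# Bayes' rule gives each source's contribution to the observed exceedance.
for name, (prior, like) in sources.items():
    print(f"P({name} | exceedance) = {prior * like / evidence:.2f}")
```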
Ni, Li-Jun; Luan, Shao-Rong; Zhang, Li-Guo
2016-10-01
Because of the numerous varieties of herbal species and active ingredients in traditional Chinese medicine (TCM), the traditional methods employed can hardly satisfy current determination requirements for TCM. The present work proposed an approach to realize rapid determination of the quality of TCM based on near infrared (NIR) spectroscopy and an internet sharing mode. A low-cost, portable multi-source composite spectrometer was invented by our group for in-site fast measurement of the spectra of TCM samples. A database could be set up by sharing spectra and quality detection data of TCM samples among TCM enterprises based on the internet platform. A novel method, called keeping the same relationship between X and Y space based on K nearest neighbors (KNN-KSR for short), was applied to predict the contents of effective compounds of the samples. In addition, a comparative study between KNN-KSR and partial least squares (PLS) was conducted. Two datasets were applied to validate the above idea: one comprised 58 Ginkgo Folium samples measured with four near-infrared spectroscopy instruments and two multi-source composite spectrometers, and the other comprised 80 corn samples available online measured with three NIR instruments. The results show that the KNN-KSR method could obtain more reliable outcomes without spectral correction, whereas transferring the PLS models to other instruments could hardly acquire better predictive results until spectral calibration was performed. Meanwhile, similar analysis results for total flavonoids and total lactones of Ginkgo Folium samples were achieved on the multi-source composite spectrometers and the near-infrared spectroscopy instruments, and the prediction results of KNN-KSR were better than those of PLS. The idea proposed in the present study requires more sample spectra and verification through further case studies. Copyright© by the Chinese Pharmaceutical Association.
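KNN-KSR is the authors' own method; as a plain baseline illustration of a neighbor-based predictor against PLS on spectral data, the following sketch uses scikit-learn's ordinary KNeighborsRegressor and PLSRegression on synthetic NIR-like spectra:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 150, 200

# Synthetic NIR-like spectra in which the analyte content drives one broad band.
content = rng.uniform(0.5, 5.0, n_samples)             # e.g. total flavonoids (%)
band = np.exp(-((np.arange(n_wavelengths) - 80) / 15.0) ** 2)
spectra = content[:, None] * band + rng.normal(0, 0.05, (n_samples, n_wavelengths))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, content, random_state=0)

for name, model in [("PLS", PLSRegression(n_components=5)),
                    ("KNN", KNeighborsRegressor(n_neighbors=5))]:
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))
    print(f"{name}: MAE = {mean_absolute_error(y_te, pred):.3f}")
```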
van Ruitenbeek, Gemma M C; Zijlstra, Fred R H; Hülsheger, Ute R
2018-06-04
Purpose Participation in regular paid jobs positively affects the mental and physical health of all people, including people with limited work capacities (LWC), that is, people whose work capacity is limited as a consequence of a disability such as chronic mental illness or a psychological or developmental disorder. For successful participation, a good fit between a person's capacities on the one hand, and well-suited individual support and a suitable work environment on the other, is necessary in order to meet the demands of work. However, to date there is a striking paucity of validated measures that indicate the capability to work of people with LWC and that outline directions for support to facilitate this fit. The goal of the present study was therefore to develop such an instrument. Specifically, we adjusted measures of mental ability, conscientiousness, self-efficacy, and coping by simplifying the language level of these measures to make the scales accessible to people with low literacy. In order to validate these adjusted self-report and observer measures we conducted two studies using multi-source, longitudinal data. Method Study 1 was a longitudinal multi-source study in which the newly developed instrument was administered twice to people with LWC and a significant other. We statistically tested the psychometric properties with respect to dimensionality and reliability. In Study 2, we collected new multi-source data and conducted a confirmatory factor analysis (CFA). Results The studies yielded a consistent factor structure in both samples, internally consistent measures with adequate content validity of scales and subscales, and high test-retest reliability. The CFA confirmed the factorial validity of the scales. Conclusion The adjusted self-report and observer scales of mental ability, conscientiousness, self-efficacy, and coping are reliable measures that are well suited to assessing the work capability of people with LWC. Further research is needed to examine criterion-related validity with respect to work demands such as work behaviour and task performance.
Gregory, Paul J; Robbins, Benjamin; Schwaitzberg, Steven D; Harmon, Larry
2017-09-01
The current research evaluated the potential utility of a 360-degree survey feedback program for measuring leadership quality in potential committee leaders of a professional medical association (PMA). Emotional intelligence as measured by the extent to which self-other agreement existed in the 360-degree survey ratings was explored as a key predictor of leadership quality in the potential leaders. A non-experimental correlational survey design was implemented to assess the variation in leadership quality scores across the sample of potential leaders. A total of 63 of 86 (76%) of those invited to participate did so. All potential leaders received feedback from PMA Leadership, PMA Colleagues, and PMA Staff and were asked to complete self-ratings regarding their behavior. Analyses of variance revealed a consistent pattern of results as Under-Estimators and Accurate Estimators-Favorable were rated significantly higher than Over-Estimators in several leadership behaviors. Emotional intelligence as conceptualized in this study was positively related to overall performance ratings of potential leaders. The ever-increasing roles and potential responsibilities for PMAs suggest that these organizations should consider multisource performance reviews as these potential future PMA executives rise through their organizations to assume leadership positions with profound potential impact on healthcare. The current findings support the notion that potential leaders who demonstrated a humble pattern or an accurate pattern of self-rating scored significantly higher in their leadership, teamwork, and interpersonal/communication skills than those with an aggrandizing self-rating.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in 2 or 3 dimensions, the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperform the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationships among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationships among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationships among the trees being compared.
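The CCA + SGD projection used in the paper is specialized; the general idea of embedding a tree-to-tree distance matrix in two dimensions can be sketched with plain metric MDS. The distance matrix below is a random placeholder for real Robinson-Foulds or geodesic tree distances:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
n_trees = 40

# Placeholder tree-to-tree distance matrix (symmetric, zero diagonal);
# in practice this would hold Robinson-Foulds or other tree-to-tree distances.
d = rng.uniform(0.0, 1.0, (n_trees, n_trees))
dist = (d + d.T) / 2.0
np.fill_diagonal(dist, 0.0)

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dist)
print("2-D coordinates of the first three trees:\n", coords[:3])
print(f"Stress (lower is better): {embedding.stress_:.2f}")
```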
Objective Work-Nonwork Conflict: From Incompatible Demands to Decreased Work Role Performance
ERIC Educational Resources Information Center
Haun, Sascha; Steinmetz, Holger; Dormann, Christian
2011-01-01
Research on work-nonwork conflict (WNC) is based on the assumption that incompatible demands from the work and the nonwork domain hamper role performance. This assumption implies that role demands from both domains interact in predicting role performance, but research has been largely limited to main effects. In this multi-source study, we analyze…
Long-Term Stability of Core Language Skill in Children with Contrasting Language Skills
ERIC Educational Resources Information Center
Bornstein, Marc H.; Hahn, Chun-Shin; Putnick, Diane L.
2016-01-01
This 4-wave longitudinal study evaluated stability of core language skill in 421 European American and African American children, half of whom were identified as low (n = 201) and half of whom were average-to-high (n = 220) in later language skill. Structural equation modeling supported loadings of multivariate age-appropriate multisource measures…
Stability of Core Language Skill from Early Childhood to Adolescence: A Latent Variable Approach
ERIC Educational Resources Information Center
Bornstein, Marc H.; Hahn, Chun-Shin; Putnick, Diane L.; Suwalsky, Joan T. D.
2014-01-01
This four-wave prospective longitudinal study evaluated stability of language in 324 children from early childhood to adolescence. Structural equation modeling supported loadings of multiple age-appropriate multisource measures of child language on single-factor core language skills at 20 months and 4, 10, and 14 years. Large stability…
Multi-Sensor Triangulation of Multi-Source Spatial Data
NASA Technical Reports Server (NTRS)
Habib, Ayman; Kim, Chang-Jae; Bang, Ki-In
2007-01-01
The introduced methodologies are successful in: a) using LIDAR features for photogrammetric geo-referencing; b) delivering geo-referenced imagery of the same quality as point-based geo-referencing procedures; and c) taking advantage of the synergistic characteristics of spatial data acquisition systems. The triangulation output can be used for the generation of 3-D perspective views.
ERIC Educational Resources Information Center
Coplan, Robert J.; Arbeau, Kimberley A.; Armer, Mandana
2008-01-01
The goal of this study was to explore the moderating role of maternal personality and parenting characteristics in the links between shyness and adjustment in kindergarten. Participants were 197 children enrolled in kindergarten programs (and their mothers and teachers). Multisource assessment was employed, including maternal ratings, behavioral…
Influence of Feedback, Teacher Praise, and Parental Support on Self-Competency of Third Graders.
ERIC Educational Resources Information Center
Davis, Lonnie H.; And Others
The purpose of this study was to demonstrate how an early assessment of self-competency can be combined with an effective program for preventing maladaptive affective (self-competency) and academic skills. Eleven third graders participated in this study of three interventions. Feedback of multisource data, teacher praise (positive reinforcement),…
USDA-ARS?s Scientific Manuscript database
The overall objectives of this study were to determine if a correlation exists between individual pharmacokinetic parameters and treatment outcome when feeder cattle were diagnosed with bovine respiratory disease (BRD) and treated with gamithromycin (Zactran®) at the label dose, and if there was a s...
ERIC Educational Resources Information Center
Coplan, Robert J.; Weeks, Murray
2010-01-01
The goal of this study was to explore the socioemotional adjustment of unsociable (versus shy) children in middle childhood. The participants in this study were 186 children aged 6-8 years (M[subscript age] = 7.59 years, SD = 0.31). Multisource assessment was employed, including maternal ratings, teacher ratings, and individual child interviews.…
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with confident accuracy is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and the validation data, such as the water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS systems are used to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith equation based model, a Priestley-Taylor equation based model, and a complementary relationship based model, are used to perform an intercomparison. The results from the two RS_ET validation cases show that the proposed validation methods are reasonable and feasible.
Zhang, Xiao-Bo; Li, Meng; Wang, Hui; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
In the literature, there is much information on the distribution of Chinese herbal medicine. Limited by the technical methods available, the origins of Chinese herbal medicines and their distributions were described only roughly in the ancient literature. Obtaining background information on the types and distribution of Chinese medicine resources in each region is one of the main objectives of the national census of Chinese medicine resources. According to the national Chinese medicine resource census technical specifications and pilot work experience, census teams using "3S" technology, computer network technology, digital camera technology, and other modern technical methods can effectively collect the location information of traditional Chinese medicine resources. Detailed and specific location information, such as regional differences and similarities in resource endowment, biological characteristics, and spatial distribution, provides technical and data support for evaluating the accuracy and objectivity of the Chinese medicine resource census data. With the support of spatial information technology and based on location information, statistical summary and sharing of multi-source census data can be realized. The integration of traditional Chinese medicine resource data and related basic data enables the spatial integration, aggregation, and management of massive data, which can help in mining the scientific rules of traditional Chinese medicine resources at the overall level and fully reveal their scientific connotations. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.
Spatial characterization of the meltwater field from icebergs in the Weddell Sea.
Helly, John J; Kaufmann, Ronald S; Vernet, Maria; Stephenson, Gordon R
2011-04-05
We describe the results from a spatial cyberinfrastructure developed to characterize the meltwater field around individual icebergs and integrate the results with regional- and global-scale data. During the course of the cyberinfrastructure development, it became clear that we were also building an integrated sampling planning capability across multidisciplinary teams that provided greater agility in allocating expedition resources resulting in new scientific insights. The cyberinfrastructure-enabled method is a complement to the conventional methods of hydrographic sampling in which the ship provides a static platform on a station-by-station basis. We adapted a sea-floor mapping method to more rapidly characterize the sea surface geophysically and biologically. By jointly analyzing the multisource, continuously sampled biological, chemical, and physical parameters, using Global Positioning System time as the data fusion key, this surface-mapping method enables us to examine the relationship between the meltwater field of the iceberg to the larger-scale marine ecosystem of the Southern Ocean. Through geospatial data fusion, we are able to combine very fine-scale maps of dynamic processes with more synoptic but lower-resolution data from satellite systems. Our results illustrate the importance of spatial cyberinfrastructure in the overall scientific enterprise and identify key interfaces and sources of error that require improved controls for the development of future Earth observing systems as we move into an era of peta- and exascale, data-intensive computing. PMID:21444769
NASA Astrophysics Data System (ADS)
Lasaponara, R.; Masini, N.; Holmgren, R.; Backe Forsberg, Y.
2012-08-01
The objective of this research is to detect and extract traces of past human activities at the Etruscan site of San Giovenale (Blera) in northern Lazio, Italy. Investigations have been conducted by integrating high-resolution satellite data with digital models derived from LiDAR survey and multi-sensor aerial prospection (traditional, thermal, and near-infrared pictures). The use of different sensor technologies is required to cope with (i) different types of surface cover, i.e. vegetated and non-vegetated areas (trees, bushes, agricultural uses, etc.), (ii) the variety of archaeological marks (micro-relief, crop marks, etc.), and (iii) different types of expected spatial/spectral feature patterns linked to past human activities (urban necropoleis, palaeorivers, etc.). Field surveys enabled us to confirm remotely sensed features which were detected in both densely and sparsely vegetated areas, thus revealing a large variety of cultural transformations and ritual and infrastructural remains such as roads, tombs, and water installations. Our findings clearly point out a connection between the Vignale plateau and the main acropolis (San Giovenale) as well as with the surrounding burial grounds. Our results suggest that the synergic use of multi-sensor/multi-source data sets, including ancillary information, provides a comprehensive overview of new findings. This facilitates the interpretation of the various results obtained from different sensors when studied in a larger perspective.
Gene prioritization and clustering by multi-view text mining
2010-01-01
Background Text mining has become a useful tool for biologists trying to understand the genetics of diseases. In particular, it can help identify the most interesting candidate genes for a disease for further experimental analysis. Many text mining approaches have been introduced, but the effect of disease-gene identification varies in different text mining models. Thus, the idea of incorporating more text mining models may be beneficial to obtain more refined and accurate knowledge. However, how to effectively combine these models still remains a challenging question in machine learning. In particular, it is a non-trivial issue to guarantee that the integrated model performs better than the best individual model. Results We present a multi-view approach to retrieve biomedical knowledge using different controlled vocabularies. These controlled vocabularies are selected on the basis of nine well-known bio-ontologies and are applied to index the vast amounts of gene-based free-text information available in the MEDLINE repository. The text mining result specified by a vocabulary is considered as a view and the obtained multiple views are integrated by multi-source learning algorithms. We investigate the effect of integration in two fundamental computational disease gene identification tasks: gene prioritization and gene clustering. The performance of the proposed approach is systematically evaluated and compared on real benchmark data sets. In both tasks, the multi-view approach demonstrates significantly better performance than other comparing methods. Conclusions In practical research, the relevance of specific vocabulary pertaining to the task is usually unknown. In such case, multi-view text mining is a superior and promising strategy for text-based disease gene identification. PMID:20074336
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
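A minimal sketch of the HSI branch only, a one-dimensional convolutional network over the spectral dimension for per-pixel classification, is shown below in PyTorch. The layer sizes, band count, class count, and random input are placeholders, not the paper's architecture or data:

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """Toy 1-D CNN over spectral bands for per-pixel HSI classification."""
    def __init__(self, n_bands: int = 103, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bands) pixel spectra -> (batch, 1, n_bands) for Conv1d
        z = self.features(x.unsqueeze(1)).squeeze(-1)
        return self.classifier(z)

model = SpectralCNN()
pixels = torch.randn(8, 103)          # 8 random pixel spectra as placeholder input
logits = model(pixels)
print(logits.shape)                   # torch.Size([8, 9])
```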
State-of-the-Art: DTM Generation Using Airborne LIDAR Data
Chen, Ziyue; Gao, Bingbo; Devereux, Bernard
2017-01-01
Digital terrain model (DTM) generation is the fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to present and test a variety of DTM generation methods. Although great progress has been made, DTM generation, especially DTM generation in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. In accordance with the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis, and multi-scale comparison. Typical methods for each category are briefly introduced and the merits and limitations of each category are discussed accordingly. Despite their different filtering strategies, these DTM generation methods present similar difficulties when implemented in sharply changing terrain, areas with dense non-ground features, and complicated landscapes. This paper suggests that the fusion of multi-source data and the integration of different methods can be effective ways of improving the performance of DTM generation. PMID:28098810
Co-Registration Between Multisource Remote-Sensing Images
NASA Astrophysics Data System (ADS)
Wu, J.; Chang, C.; Tsai, H.-Y.; Liu, M.-C.
2012-07-01
Image registration is essential for geospatial information systems analysis, which usually involves integrating multitemporal and multispectral datasets from remote optical and radar sensors. An algorithm that deals with feature extraction, keypoint matching, outlier detection, and image warping is tested in this study. The methods currently available in the literature rely on techniques such as the scale-invariant feature transform, between-edge cost minimization, normalized cross correlation, least-squares image matching, random sample consensus, iterated data snooping, and thin-plate splines. Their basics are highlighted and encoded into a computer program. The test images are excerpts from digital files created by the multispectral SPOT-5 and Formosat-2 sensors, and by the panchromatic IKONOS and QuickBird sensors. Suburban areas, housing rooftops, the countryside, and hilly plantations are studied. The co-registered images are displayed with block subimages in a criss-cross pattern. Besides the imagery, the registration accuracy is expressed by the root mean square error. Toward the end, this paper also includes a few opinions on issues that are believed to hinder a correct correspondence between diverse images.
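A minimal sketch of the keypoint-matching and outlier-rejection stages named above follows, using ORB features and a RANSAC homography as stand-ins for the SIFT, least-squares matching, and thin-plate-spline machinery discussed in the paper; the images are synthetic (a random texture and a shifted copy):

```python
import cv2
import numpy as np

# Synthetic stand-ins for two overlapping image excerpts: a random texture and
# a translated copy of it (real inputs would be the multisource image files).
rng = np.random.default_rng(5)
ref = rng.uniform(0, 255, (400, 400)).astype(np.uint8)
ref = cv2.GaussianBlur(ref, (5, 5), 0)
shift = np.float32([[1, 0, 12], [0, 1, -7]])          # known shift of (12, -7) pixels
tgt = cv2.warpAffine(ref, shift, (400, 400))

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(tgt, None)

# Brute-force Hamming matching with cross-check to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences before the transform is estimated.
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, ransacReprojThreshold=3.0)
registered = cv2.warpPerspective(tgt, H, (ref.shape[1], ref.shape[0]))
print(f"Inliers: {int(inliers.sum())}/{len(matches)}")
print("Estimated transform (should undo the (12, -7) pixel shift):\n", np.round(H, 2))
```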
Potential Collaborative Research topics with Korea’s Agency for Defense Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Charles R.; Todd, Michael D.
2012-08-23
This presentation provides a high-level summary of current research activities at the Los Alamos National Laboratory (LANL)-University of California San Diego Jacobs School of Engineering (UCSD) Engineering Institute that will be presented at Korea's Agency for Defense Development (ADD). These research activities are at the basic engineering science level, with different levels of maturity ranging from initial concepts to field proof-of-concept demonstrations. We believe that all of these activities are appropriate for collaborative research with ADD, subject to approval by each institution. All the activities summarized herein have the common theme that they are multi-disciplinary in nature and typically involve the integration of high-fidelity predictive modeling, advanced sensing technologies, and new developments in information technology. These activities include: Wireless Sensor Systems, Swarming Robot Sensor Systems, Advanced Signal Processing (compressed sensing) and Pattern Recognition, Model Verification and Validation, Optimal/Robust Sensor System Design, Haptic Systems for Large-Scale Data Processing, Cyber-Physical Security for Robots, Multi-Source Energy Harvesting, Reliability-Based Approaches to Damage Prognosis, SHMTools Software Development, and a Cyber-Physical Systems Advanced Study Institute.
NASA Astrophysics Data System (ADS)
Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi
2015-04-01
The objective of project North State, funded by Framework Program 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated with remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the objective of completely automatic estimation. North State has two sites for intensive studies, in southern and northern Finland respectively, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated and the models applied over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to the earth observation variables. Flux tower measurements are also applied in the model assessment. In the paper, results of hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented. An example of a completely automatic land cover class prediction is also reported.
Estimating Global Impervious Surface based on Social-economic Data and Satellite Observations
NASA Astrophysics Data System (ADS)
Zeng, Z.; Zhang, K.; Xue, X.; Hong, Y.
2016-12-01
Impervious surface areas around the globe are expanding and significantly altering the surface energy balance, the hydrologic cycle, and ecosystem services. Many studies have underlined the importance of impervious surface, ranging from hydrological modeling to contaminant transport monitoring and urban development estimation. Therefore, accurate estimation of the global impervious surface is important for both the physical and social sciences. Given the limited coverage of high spatial resolution imagery and ground surveys, using satellite remote sensing and geospatial data to estimate global impervious areas is a practical approach. Based on previous work on area-weighted imperviousness for the north branch of the Chicago River provided by HDR, this study developed a method to determine the percentage of impervious surface using the latest global land cover categories from multi-source satellite observations, population density, and gross domestic product (GDP) data. Percent impervious surface at 30-meter resolution was mapped. We found that 1.33% of the CONUS (105,814 km2) and 0.475% of the global land surface (640,370 km2) are impervious surfaces. To test the utility and practicality of the proposed method, the National Land Cover Database (NLCD) 2011 percent developed imperviousness for the conterminous United States was used to evaluate our results. The average difference between the imperviousness derived from our method and the NLCD data across CONUS is 1.14%, while the differences between our results and the NLCD data are within ±1% over 81.63% of the CONUS. The distribution in the global impervious surface map indicates that impervious surfaces are primarily concentrated in China, India, Japan, the USA, and Europe, which are highly populated and/or developed. This study proposes a straightforward way of mapping global imperviousness, which can provide useful information for hydrologic modeling and other applications.
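The area-weighting idea referenced above can be sketched by assigning each land-cover class an assumed percent-impervious coefficient and aggregating over a grid. The classes and coefficients below are invented for illustration; the study additionally brings in population density and GDP:

```python
import numpy as np

# Illustrative percent-impervious coefficients per land-cover class (assumed values).
imperviousness_by_class = {
    0: 0.0,    # water
    1: 0.0,    # forest
    2: 2.0,    # cropland
    3: 35.0,   # low-density built-up
    4: 80.0,   # high-density built-up
}

rng = np.random.default_rng(11)
landcover = rng.choice(list(imperviousness_by_class), size=(1000, 1000),
                       p=[0.05, 0.55, 0.30, 0.07, 0.03])

lookup = np.zeros(max(imperviousness_by_class) + 1)
for cls, pct in imperviousness_by_class.items():
    lookup[cls] = pct
percent_impervious = lookup[landcover]               # per-pixel percent impervious

pixel_area_km2 = 0.03 * 0.03                         # ~30 m pixels
impervious_area = (percent_impervious / 100.0).sum() * pixel_area_km2
total_area = landcover.size * pixel_area_km2
print(f"Impervious area: {impervious_area:.1f} km2 "
      f"({100 * impervious_area / total_area:.2f}% of the domain)")
```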
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk by incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate the identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
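As a rough illustration of the Bayesian estimation step described above (posterior distributions of model parameters given observed water quality), here is a minimal random-walk Metropolis sketch. The toy export model, the Gaussian likelihood, the prior bounds and the observed loads are stand-ins; the actual study infers SWAT parameters, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed nutrient loads (t/yr) at a watershed outlet.
obs = np.array([12.1, 14.3, 11.7, 13.5, 12.9])

def log_posterior(export_coeff, sigma=1.0):
    """Gaussian likelihood around a toy loading model plus a flat prior on (0, 5)."""
    if not (0.0 < export_coeff < 5.0):
        return -np.inf
    predicted = 10.0 * export_coeff          # toy nutrient loading model
    return -0.5 * np.sum((obs - predicted) ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior.
samples, current = [], 1.0
for _ in range(20000):
    proposal = current + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior mean={post.mean():.3f}, "
      f"95% interval=({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```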
Kissling, W Daniel; Ahumada, Jorge A; Bowser, Anne; Fernandez, Miguel; Fernández, Néstor; García, Enrique Alonso; Guralnick, Robert P; Isaac, Nick J B; Kelling, Steve; Los, Wouter; McRae, Louise; Mihoub, Jean-Baptiste; Obst, Matthias; Santamaria, Monica; Skidmore, Andrew K; Williams, Kristen J; Agosti, Donat; Amariles, Daniel; Arvanitidis, Christos; Bastin, Lucy; De Leo, Francesca; Egloff, Willi; Elith, Jane; Hobern, Donald; Martin, David; Pereira, Henrique M; Pesole, Graziano; Peterseil, Johannes; Saarenmaa, Hannu; Schigel, Dmitry; Schmeller, Dirk S; Segata, Nicola; Turak, Eren; Uhlir, Paul F; Wee, Brian; Hardisty, Alex R
2018-02-01
Much biodiversity data is collected worldwide, but it remains challenging to assemble the scattered knowledge for assessing biodiversity status and trends. The concept of Essential Biodiversity Variables (EBVs) was introduced to structure biodiversity monitoring globally, and to harmonize and standardize biodiversity data from disparate sources to capture a minimum set of critical variables required to study, report and manage biodiversity change. Here, we assess the challenges of a 'Big Data' approach to building global EBV data products across taxa and spatiotemporal scales, focusing on species distribution and abundance. The majority of currently available data on species distributions derives from incidentally reported observations or from surveys where presence-only or presence-absence data are sampled repeatedly with standardized protocols. Most abundance data come from opportunistic population counts or from population time series using standardized protocols (e.g. repeated surveys of the same population from single or multiple sites). Enormous complexity exists in integrating these heterogeneous, multi-source data sets across space, time, taxa and different sampling methods. Integration of such data into global EBV data products requires correcting biases introduced by imperfect detection and varying sampling effort, dealing with different spatial resolution and extents, harmonizing measurement units from different data sources or sampling methods, applying statistical tools and models for spatial inter- or extrapolation, and quantifying sources of uncertainty and errors in data and models. To support the development of EBVs by the Group on Earth Observations Biodiversity Observation Network (GEO BON), we identify 11 key workflow steps that will operationalize the process of building EBV data products within and across research infrastructures worldwide. These workflow steps take multiple sequential activities into account, including identification and aggregation of various raw data sources, data quality control, taxonomic name matching and statistical modelling of integrated data. We illustrate these steps with concrete examples from existing citizen science and professional monitoring projects, including eBird, the Tropical Ecology Assessment and Monitoring network, the Living Planet Index and the Baltic Sea zooplankton monitoring. The identified workflow steps are applicable to both terrestrial and aquatic systems and a broad range of spatial, temporal and taxonomic scales. They depend on clear, findable and accessible metadata, and we provide an overview of current data and metadata standards. Several challenges remain to be solved for building global EBV data products: (i) developing tools and models for combining heterogeneous, multi-source data sets and filling data gaps in geographic, temporal and taxonomic coverage, (ii) integrating emerging methods and technologies for data collection such as citizen science, sensor networks, DNA-based techniques and satellite remote sensing, (iii) solving major technical issues related to data product structure, data storage, execution of workflows and the production process/cycle as well as approaching technical interoperability among research infrastructures, (iv) allowing semantic interoperability by developing and adopting standards and tools for capturing consistent data and metadata, and (v) ensuring legal interoperability by endorsing open data or data that are free from restrictions on use, modification and sharing. 
Addressing these challenges is critical for biodiversity research and for assessing progress towards conservation policy targets and sustainable development goals. © 2017 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.
Discriminant Validity of Self-Reported Emotional Intelligence: A Multitrait-Multisource Study
ERIC Educational Resources Information Center
Joseph, Dana L.; Newman, Daniel A.
2010-01-01
A major stumbling block for emotional intelligence (EI) research has been the lack of adequate evidence for discriminant validity. In a sample of 280 dyads, self- and peer-reports of EI and Big Five personality traits were used to confirm an a priori four-factor model for the Wong and Law Emotional Intelligence Scale (WLEIS) and a five-factor…
ERIC Educational Resources Information Center
Blitz, Mark H.; Modeste, Marsha
2015-01-01
The Comprehensive Assessment of Leadership for Learning (CALL) is a multi-source assessment of distributed instructional leadership. As part of the validation of CALL, researchers examined differences between teacher and leader ratings in assessing distributed leadership practices. The authors utilized a t-test for equality of means for the…
Applying an efficient K-nearest neighbor search to forest attribute imputation
Andrew O. Finley; Ronald E. McRoberts; Alan R. Ek
2006-01-01
This paper explores the utility of an efficient nearest neighbor (NN) search algorithm for applications in multi-source kNN forest attribute imputation. The search algorithm reduces the number of distance calculations between a given target vector and each reference vector, thereby decreasing the time needed to discover the NN subset. Results of five trials show gains...
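The abstract does not reproduce the search algorithm itself, but the idea of avoiding exhaustive distance calculations in kNN attribute imputation can be sketched with a KD-tree query; everything below (feature dimensionality, the synthetic stand-volume response, the inverse-distance weighting) is assumed for illustration and is not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Reference plots: spectral feature vectors with a measured forest attribute,
# and target pixels with spectral features only (all values synthetic).
ref_features = rng.normal(size=(1000, 6))
ref_volume = 150 + 40 * ref_features[:, 0] + rng.normal(scale=10, size=1000)
target_features = rng.normal(size=(5, 6))

# A KD-tree prunes most distance calculations relative to brute-force search,
# which is the spirit of the efficient NN search discussed in the paper.
tree = cKDTree(ref_features)
dist, idx = tree.query(target_features, k=5)

# Impute stand volume as the inverse-distance-weighted mean of the k neighbours.
weights = 1.0 / np.maximum(dist, 1e-9)
imputed_volume = np.sum(weights * ref_volume[idx], axis=1) / np.sum(weights, axis=1)
print(imputed_volume)
```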
Conceptualizing High School Students' Mental Health through a Dual-Factor Model
ERIC Educational Resources Information Center
Suldo, Shannon M; Thalji-Raitano, Amanda; Kiefer, Sarah M.; Ferron, John M.
2016-01-01
Mental health is increasingly viewed as a complete state of being, consisting of the absence of psychopathology and the presence of positive factors such as subjective well-being (SWB). This cross-sectional study analyzed multimethod and multisource data for 500 high school students (ages 14-18 years, M = 15.27 years, SD = 1.0 years) to examine…
ERIC Educational Resources Information Center
Berkovich, Izhak; Eyal, Ori
2017-01-01
The present study aims to examine whether principals' emotional intelligence (specifically, their ability to recognize emotions in others) makes them more effective transformational leaders, measured by the reframing of teachers' emotions. The study uses multisource data from principals and their teachers in 69 randomly sampled primary schools.…
ERIC Educational Resources Information Center
Prager, Carolyn; And Others
The education and reeducation of health care professionals remain essential, if somewhat neglected, elements in reforming the nation's health care system. The Pew Health Professions Commission (PHPC) has made the reform of health care contingent upon the reform of education, urging educational institutions to design core curricula with…
Group Multilateral Relation Analysis Based on Large Data
NASA Astrophysics Data System (ADS)
LIU, Qiang; ZHOU, Guo-min; CHEN, Guang-xuan; XU, Yong
2017-09-01
Massive, multi-source, heterogeneous police and social data bring challenges to current police work. Taking the existing massive data resources as the research object, group multilateral relations are mined using large data technology for data archiving. The results of the study could provide technical support to police enforcement departments for fighting and preventing crime.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
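To make the separation idea concrete, the following is a minimal single-snapshot linear-algebra sketch, assuming a known transfer matrix from equivalent source strengths to microphone pressures: all strengths are solved jointly, and the field of one source is then rebuilt from its own strengths only. The real ITDESM works in the time domain with an iterative solver and geometry-derived transfer functions, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy snapshot: 40 microphone pressures produced by two groups of equivalent
# sources (20 per physical source). G maps strengths to pressures; in ITDESM it
# would follow from source/measurement geometry, here it is random for illustration.
n_mics, n_src = 40, 2 * 20
G = rng.normal(size=(n_mics, n_src))
true_strengths = rng.normal(size=n_src)
p_mixed = G @ true_strengths                      # measured mixed pressure

# Solve for all equivalent source strengths at once (least squares).
strengths, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)

# Keep only the strengths attached to source 1 to reconstruct its field alone.
mask = np.zeros(n_src)
mask[:20] = 1.0
p_source1 = G @ (strengths * mask)
print(np.allclose(p_source1, G[:, :20] @ true_strengths[:20]))
```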
A multi-source feedback tool for measuring a subset of Pediatrics Milestones.
Schwartz, Alan; Margolis, Melissa J; Multerer, Sara; Haftel, Hilary M; Schumacher, Daniel J
2016-10-01
The Pediatrics Milestones Assessment Pilot employed a new multisource feedback (MSF) instrument to assess nine Pediatrics Milestones among interns and subinterns in the inpatient context. Our objective was to report validity evidence for the MSF tool for informing milestone classification decisions. We obtained MSF instruments from different raters per learner per rotation, and we present evidence for validity based on the unified validity framework. One hundred ninety-two interns and 41 subinterns at 18 Pediatrics residency programs received a total of 1084 MSF forms from faculty (40%), senior residents (34%), nurses (22%), and other staff (4%). Variance in ratings was associated primarily with rater (32%) and learner (22%). The milestone factor structure fit the data better than simpler structures. In all domains except professionalism, ratings by nurses were significantly lower than those by faculty, and ratings by other staff were significantly higher. Ratings were higher when the rater observed the learner for longer periods and had a positive global opinion of the learner. Ratings of interns and subinterns did not differ, except for ratings by senior residents. MSF-based scales correlated with summative milestone scores. We obtained moderately reliable MSF ratings of interns and subinterns in the inpatient context to inform some milestone assignments.
Research for the jamming mechanism of high-frequency laser to the laser seeker
NASA Astrophysics Data System (ADS)
Zheng, Xingyuan; Zhang, Haiyang; Wang, Yunping; Feng, Shuang; Zhao, Changming
2013-08-01
A high-frequency laser can enter the enemy's laser signal processing system without encoded identification or a copy of the code, which makes it one of the research directions for new interference sources. In order to study the interference mechanism of high-frequency lasers against laser-guided weapons, a series of theoretical models is established according to the principle of high-frequency laser interference: a semi-active laser seeker coded-identification model, a time-gate model, a multi-signal processing model and an interference signal modulation processing model. A 3σ criterion for effective seeker interference is then proposed. On this basis, the effects of multi-source interference and of the signal characteristics of high-repetition-frequency laser interference are the key subjects of research. According to the simulation system tests, multi-source interference and frequency modulation of the interference signal can effectively enhance the interference effect, while the effect of amplitude modulation of the interference signal is not obvious. The research results provide an evaluation of the high-frequency laser interference effect and theoretical references for the application of high-frequency laser interference systems.
Kaplan, Haim; Kaplan, Lilach
2016-12-01
In recent years, there has been growing demand for radiofrequency (RF)-based procedures to improve skin texture, laxity and contour. The new generation of systems allows non-invasive and fractional resurfacing treatments on one platform. The aim of this study was to evaluate the safety and efficacy of a new treatment protocol using multisource RF, combining 3 different modalities in each patient: [1] non-ablative RF skin tightening, [2] fractional skin resurfacing, and [3] microneedling RF for non-ablative coagulation and collagen remodelling. 14 subjects were enrolled in this study using the EndyMed PRO™ platform. Each patient had 8 non-ablative treatments and 4 fractional treatments (fractional skin resurfacing and Intensif). The global aesthetic score was used to evaluate improvement. All patients had improvement in skin appearance: about 43% had excellent or very good improvement above 50%, 18% had good improvement between 25 and 50%, and the remaining 39% had mild improvement of <25%. Downtime was minimal and no adverse effects were reported. Our data show significant improvement of skin texture, skin laxity and wrinkle reduction achieved using the RF treatment platform.
NASA Astrophysics Data System (ADS)
Nghiem, S. V.; Small, C.; Jacobson, M. Z.; Brakenridge, G. R.; Balk, D.; Sorichetta, A.; Masetti, M.; Gaughan, A. E.; Stevens, F. R.; Mathews, A.; Frazier, A. E.; Das, N. N.
2017-12-01
An innovative paradigm to observe the rural-urban transformation over the landscape using multi-sourced satellite data is formulated as a time and space continuum, extensively in space across South and Southeast Asia and in time over a decadal scale. Rather than a disparate array of individual cities and their vicinities in separated areas and in a discontinuous collection of points in time, the time-space continuum paradigm enables significant advances in addressing rural-urban change as a continuous gradient across the landscape from wilderness to rural to urban areas, in order to study challenging environmental and socioeconomic issues. We use satellite data including QuikSCAT scatterometer, SRTM and Sentinel-1 SAR, Landsat, WorldView, MODIS, and SMAP together with environmental and demographic data and modeling products to investigate land cover and land use change in South and Southeast Asia and the associated impacts. Utilizing the new observational advances and effectively capitalizing on current capabilities, we will present interdisciplinary results on urbanization in three dimensions, flood and drought, wildfire, air and water pollution, urban change, policy effects, population dynamics and vector-borne disease, agricultural assessment, and land degradation and desertification.
Chenxi, Li; Chen, Yanni; Li, Youjun; Wang, Jue; Liu, Tian
2016-06-01
The multiscale entropy (MSE) is a novel method for quantifying the intrinsic dynamical complexity of physiological systems over several scales. To evaluate this method as a promising way to explore the neural mechanisms of ADHD, we calculated the MSE of EEG activity during a designed task. EEG data were collected from 13 outpatient boys with a confirmed diagnosis of ADHD and 13 age- and gender-matched normal control children while they performed the multi-source interference task (MSIT). We estimated the MSE by calculating the sample entropy values of the delta, theta, alpha and beta frequency bands over twenty time scales using a coarse-graining procedure. The results showed increased complexity of the EEG data in the delta and theta frequency bands and decreased complexity in the alpha frequency band in ADHD children. The findings of this study reveal aberrant neural connectivity in children with ADHD during an interference task and suggest that the MSE method may serve as a new index to identify and understand the neural mechanisms of ADHD. Copyright © 2016 Elsevier Inc. All rights reserved.
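A minimal sketch of the coarse-graining plus sample-entropy computation named above may help; the tolerance (0.2 times the standard deviation), the template length m = 2 and the synthetic stand-in signal are common defaults assumed here, not necessarily the study's exact settings.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with Chebyshev distance, self-matches excluded."""
    r = r_factor * np.std(x)
    n_templates = len(x) - m

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=20, m=2, r_factor=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r_factor)
            for s in range(1, max_scale + 1)]

rng = np.random.default_rng(3)
signal = rng.normal(size=3000)   # stand-in for one band-filtered EEG epoch
print(np.round(multiscale_entropy(signal, max_scale=5), 3))
```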
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As the forward solver, our parallel adaptive finite element forward modeling program is employed. It uses a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver MUMPS, we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of computing accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using the inexact conjugate gradient method. Synthetic test inversions are presented.
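Below is a minimal sketch of one regularized Gauss-Newton model update with the normal equations solved inexactly by conjugate gradients; the linear toy forward model, identity regularization and damping value are assumptions standing in for the finite element forward solve and roughness operator of a real CSEM inversion.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(jacobian, residual, model, model_ref, lam):
    """One damped Gauss-Newton update: solve (J^T J + lam*I) dm = J^T r - lam*(m - m_ref)
    inexactly with conjugate gradients, then return the updated model."""
    n = len(model)
    A = LinearOperator((n, n), matvec=lambda v: jacobian.T @ (jacobian @ v) + lam * v,
                       dtype=float)
    rhs = jacobian.T @ residual - lam * (model - model_ref)
    dm, _ = cg(A, rhs, maxiter=200)
    return model + dm

# Tiny synthetic example: a linear "forward model" d = J m with noisy data.
rng = np.random.default_rng(4)
J = rng.normal(size=(30, 10))
m_true = rng.normal(size=10)
d_obs = J @ m_true + rng.normal(scale=0.01, size=30)

m = np.zeros(10)
for _ in range(3):
    m = gauss_newton_step(J, d_obs - J @ m, m, np.zeros(10), lam=0.1)
print(np.round(m - m_true, 2))   # should be close to zero
```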
NASA Astrophysics Data System (ADS)
Bénédic, Fabien; Baudrillart, Benoit; Achard, Jocelyn
2018-02-01
In this paper we investigate a distributed antenna array Plasma Enhanced Chemical Vapor Deposition system, composed of 16 microwave plasma sources arranged in a 2D matrix, which enables the growth of 4-in. diamond films at low pressure and low substrate temperature using H2/CH4/CO2 gas chemistry. A self-consistent two-dimensional plasma model developed for hydrogen discharges is used to study the discharge behavior. In particular, the gas temperature is estimated to be close to 350 K at the position corresponding to the substrate location during growth, which is suitable for low-temperature deposition. Multi-source discharge modeling shows that the uniformity of the plasma sheet formed by the individual plasmas ignited around each elementary microwave source strongly depends on the distance to the antennas. The radial profile of the film thickness homogeneity may thus be linked to the local variations in species density. Contribution to the topical issue "Plasma Sources and Plasma Processes (PSPP)", edited by Luis Lemos Alves, Thierry Belmonte and Tiberiu Minea.
Integration of remote sensing and geophysical techniques for coastal monitoring
NASA Astrophysics Data System (ADS)
Simoniello, T.; Carone, M. T.; Loperte, A.; Satriani, A.; Imbrenda, V.; D'Emilio, M.; Guariglia, A.
2009-04-01
Coastal areas are of great environmental, economic, social, cultural and recreational relevance; therefore, the implementation of suitable monitoring and protection actions is fundamental for their preservation and for assuring future use of this resource. Such actions have to be based on an ecosystem perspective for preserving coastal environment integrity and functioning and for planning sustainable resource management of both the marine and terrestrial components (ICZM-EU initiative). We implemented an integrated study based on remote sensing and geophysical techniques for monitoring a coastal area located along the Ionian side of the Basilicata region (Southern Italy). This area, between the Bradano and Basento river mouths, is mainly characterized by a narrow shore (10-30 m) of fine sandy formations and by a pine forest planted in the early 1950s in order to preserve the coast and the inland cultivated areas. Due to drought and fire events and saltwater intrusion phenomena, this forest is affected by a strong decline with consequent environmental problems. Multispectral satellite data were adopted for evaluating the spatio-temporal features of coastal vegetation and the structure of forested patterns. The increase or decrease in vegetation activity was analyzed from trends estimated on a time series of NDVI (Normalized Difference Vegetation Index) maps. The fragmentation/connection levels of vegetated patterns were assessed from a set of landscape ecology metrics elaborated at different structural scales (patch, class and landscape) on satellite cover classifications. Information on shoreline changes was derived from a multi-source data set (satellite data, field GPS surveys and aerial laser scanner acquisitions), also taking tidal effects into account. Geophysical campaigns were performed to characterize soil features and the limits of salty water infiltration. From vertical resistivity soundings (VES), soil resistivity maps at different depths (0.5, 1.0 and 1.5 m) were obtained; in addition, electrical resistivity tomographies (ERT) were acquired with different orientations and lengths. The analysis of vegetation activity from satellite data identified large patches affected by vegetation decline and fragmentation processes, where geophysical measurements highlighted saltwater infiltration. Moreover, they showed that this phenomenon has not only a horizontal distribution but also a vertical diffusion affecting the layer active for plant roots. Since a severe shoreline regression (up to 90 m) was observed along the investigated coast, erosional processes could have increased the saltwater intrusion during the last 20 years. On the whole, the obtained results suggest that the integration of remote sensing peculiarities (synoptic view, multi-temporal availability) with those of geophysical techniques (local detail, non-invasive soundings) can be a suitable support tool for planning and management activities in coastal areas (e.g., the identification of the most appropriate sites for ecological interventions or for barrage and earthen block construction).
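As an illustration of the NDVI trend analysis mentioned above, here is a minimal per-pixel least-squares slope computation over a synthetic NDVI time series; the stack size, the years and the artificially declining patch are invented for the example and are not the study's data.

```python
import numpy as np

# Synthetic NDVI stack: 20 annual composites of a 4 x 4 pixel window
# (stand-in for the multi-year NDVI map series discussed above).
rng = np.random.default_rng(5)
years = np.arange(2000, 2020)
ndvi = 0.5 + rng.normal(scale=0.02, size=(len(years), 4, 4))
ndvi[:, :2, :] -= 0.01 * (years - years[0])[:, None, None]   # declining patch

# Per-pixel least-squares slope of NDVI against time; negative slopes flag
# patches of vegetation decline worth cross-checking with geophysical surveys.
t = years - years.mean()
stack = ndvi.reshape(len(years), -1)
slopes = (t @ (stack - stack.mean(axis=0))) / (t @ t)
print(np.round(slopes.reshape(4, 4), 4))
```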
Huntington disease reduced penetrance alleles occur at high frequency in the general population
Kay, Chris; Collins, Jennifer A.; Miedzybrodzka, Zosia; Madore, Steven J.; Gordon, Erynn S.; Gerry, Norman; Davidson, Mark; Slama, Ramy A.
2016-01-01
Objective: To directly estimate the frequency and penetrance of CAG repeat alleles associated with Huntington disease (HD) in the general population. Methods: CAG repeat length was evaluated in 7,315 individuals from 3 population-based cohorts from British Columbia, the United States, and Scotland. The frequency of ≥36 CAG alleles was assessed out of a total of 14,630 alleles. The general population frequency of reduced penetrance alleles (36–39 CAG) was compared to the prevalence of patients with HD with genetically confirmed 36–39 CAG from a multisource clinical ascertainment in British Columbia, Canada. The penetrance of 36–38 CAG repeat alleles for HD was estimated for individuals ≥65 years of age and compared against previously reported clinical penetrance estimates. Results: A total of 18 of 7,315 individuals had ≥36 CAG, revealing that approximately 1 in 400 individuals from the general population have an expanded CAG repeat associated with HD (0.246%). Individuals with CAG 36–37 genotypes are the most common (36, 0.096%; 37, 0.082%; 38, 0.027%; 39, 0.000%; ≥40, 0.041%). General population CAG 36–38 penetrance rates are lower than penetrance rates extrapolated from clinical cohorts. Conclusion: HD alleles with a CAG repeat length of 36–38 occur at high frequency in the general population. The infrequent diagnosis of HD at this CAG length is likely due to low penetrance. Another important contributing factor may be reduced ascertainment of HD in those of older age. PMID:27335115
Development of a 2001 National Land Cover Database for the United States
Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael
2004-01-01
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nation-wide Landsat 5 and 7 imagery and to derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework that allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row; (2) ancillary data, including a 30 m Digital Elevation Model (DEM) from which slope, aspect and slope position are derived; (3) per-pixel estimates of percent imperviousness and percent tree canopy; (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives; and (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government, with planned completion by 2006.
ERIC Educational Resources Information Center
Blickle, Gerhard; Schneider, Paula B.; Perrewe, Pamela L.; Blass, Fred R.; Ferris, Gerald R.
2008-01-01
Purpose: The purpose of this study was to investigate the role of protege self-presentation by self-disclosure, modesty, and self-monitoring in mentoring. Design/methodology/approach: This study used three data sources (i.e. employees, peers, and mentors) and a longitudinal design over a period of two years. Findings: Employee self-disclosure and…
Time-Resolved and Spectroscopic Three-Dimensional Optical Breast Tomography
2009-03-01
Project highlights include development of a near-infrared center-of-intensity time-gated imaging approach, polarization sensitive imaging, a spectroscopic imaging arrangement, and a multi-source illumination and multi-detector signal acquisition arrangement for time-resolved transillumination.
The Multi-energy High precision Data Processor Based on AD7606
NASA Astrophysics Data System (ADS)
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information from multi-energy systems, forming the information platform of the energy Internet at Laogang, which has electricity as its major energy source. Combined with information fusion technologies, this paper analyzes the data to improve the overall energy system scheduling capability and reliability.
ERIC Educational Resources Information Center
Sonnentag, Sabine; Kuttler, Iris; Fritz, Charlotte
2010-01-01
This paper examines psychological detachment (i.e., mentally "switching off") from work during non-work time as a partial mediator between job stressors and low work-home boundaries on the one hand and strain reactions (emotional exhaustion, need for recovery) on the other hand. Survey data were collected from a sample of protestant pastors (N =…
1979-12-01
vegetation shows on the imagery, but emphasis has been placed on the detection of wooded and scrub areas and the differentiation between deciduous and... S. A., 1974b, Phenology and remote sensing, phenology and seasonality modeling: in Lieth, H. (ed.), Ecological Studies-Analysis and Synthesis... Remote Sensing of Ecology, University of Georgia Press, Athens, Georgia, p. 63-94. Phillipson, W. R. and T. Liang, 1975, Airphoto analysis in the
ERIC Educational Resources Information Center
Kim, Loretta; Wong, Shun Han Rebekah
2015-01-01
This article discusses the objectives and outcomes of a project to enhance digital humanities training at the undergraduate level in a Hong Kong university. The co-investigators re-designed a multi-source data-set as an example and then taught a multi-step curriculum about gathering, organizing, and presenting original data to an introductory…
Data-to-Decisions S&T Priority Initiative
2011-11-08
Briefing topics include context mapping, a track performance model, and multi-source tracking (track fusion, tracking through gaps, move-stop-move, performance based ...). Data-to-Decisions S&T Priority Initiative, presented by Dr. Carey Schwartz, PSC Lead, Office of Naval Research, at the NDIA Disruptive Technologies Conference, November 8-9, 2011.
ERIC Educational Resources Information Center
Restubog, Simon Lloyd D.; Scott, Kristin L.; Zagenczyk, Thomas J.
2011-01-01
We developed a model of the relationships among aggressive norms, abusive supervision, psychological distress, family undermining, and supervisor-directed deviance. We tested the model in 2 studies using multisource data: a 3-wave investigation of 184 full-time employees (Study 1) and a 2-wave investigation of 188 restaurant workers (Study 2).…
Elman, Monica; Harth, Yoram
2011-01-01
The basic properties of lasers and pulsed light sources limit their ability to deliver high energy to the dermis and subcutaneous tissues without excessive damage to the epidermis. Radiofrequency was shown to penetrate deeper than optical light sources, independent of skin color. The early RF-based devices used single-source bipolar RF, which is safe but limited in use due to the superficial flow of energy between the two bipolar electrodes. Another type of single-source RF employs a single electrode (monopolar), in which the RF energy flows from one electrode on the surface of the skin through the entire body to a plate under the body. Although more effective than bipolar, these devices require intense active cooling of the skin and may be associated with considerable pain and other systemic and local safety concerns. The latest generation of RF technology developed by EndyMed Medical Ltd. (Caesarea, Israel) utilizes six or more phase-controlled RF generators simultaneously (3DEEP technology). The multiple electrical fields created by the multiple sources “repel” or “attract” each other, leading to precise three-dimensional delivery of RF energy to the dermal and sub-dermal targets while minimizing the energy flow through the epidermis, without the need for active cooling. Confocal microscopy of the skin has shown that 6 treatment sessions with multisource RF technology improve skin structure features: the skin after treatment had longer and narrower dermal papillae and denser and finer collagen fibers typical of younger skin compared to pre-treatment skin. Ultrasound of the skin showed, after 6 treatment sessions, a 10 percent reduction in the thickness of the subcutaneous fat layer. Non-ablative facial clinical studies showed a significant reduction of wrinkles after treatment, further reduced at the 3-month follow-up. Body treatment studies showed a circumference reduction of 2.9 cm immediately after 6 treatments, and 2 cm at 12 months after the end of treatment, proving a long-term collagen remodeling effect. Clinical studies of the multisource fractional RF application have shown significant effects on wrinkle reduction and deep atrophic acne scars after 1–3 treatment sessions. PMID:24155523
NASA Astrophysics Data System (ADS)
Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.
2015-02-01
The complexity of archaeological sites hinders integral modelling using current Geomatic techniques (i.e. aerial and close-range photogrammetry and terrestrial laser scanning) individually, so a multi-sensor approach is proposed as the best solution to provide 3D reconstruction and visualization of these complex sites. Sensor registration represents a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach in which heterogeneous material (historical documents, drawings, archaeological techniques, habits of living, etc.) is collected and combined with the resulting hybrid 3D models. The proposed multi-data-source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, using an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (i.e. the presence of three different defensive walls), together with the considerable stratification of the archaeological site (i.e. different archaeological surfaces and constructive typologies), requires that tangible and intangible archaeological heritage expressions be integrated with the hybrid 3D models obtained, so that different experts and heritage stakeholders can analyse, understand and exploit the archaeological site.
Back-Projection Cortical Potential Imaging: Theory and Results.
Haor, Dror; Shavit, Reuven; Shapiro, Moshe; Geva, Amir B
2017-07-01
Electroencephalography (EEG) is the single brain monitoring technique that is non-invasive, portable, passive, exhibits high temporal resolution, and gives a direct measurement of the scalp electrical potential. A major disadvantage of the EEG is its low spatial resolution, which is the result of the low-conductivity skull that "smears" the currents coming from within the brain. Recording brain activity with both high temporal and spatial resolution is crucial for the localization of confined brain activations and the study of brain mechanism functionality, which is then followed by diagnosis of brain-related diseases. In this paper, a new cortical potential imaging (CPI) method is presented. The new method gives an estimation of the electrical activity on the cortex surface and thus removes the "smearing effect" caused by the skull. The scalp potentials are back-projected (BP-CPI) onto the cortex surface by posing a well-posed problem for the Laplace equation, which is solved by means of the finite element method on a realistic head model. A unique solution to the CPI problem is obtained by introducing a cortical normal current estimation technique. The technique is based on the same mechanism used in the well-known surface Laplacian calculation, followed by a scalp-cortex back-projection routine. The BP-CPI passed four stages of validation, including validation on spherical and realistic head models, probabilistic analysis (Monte Carlo simulation), and noise sensitivity tests. In addition, the BP-CPI was compared with the minimum norm estimate CPI approach and found superior for multi-source cortical potential distributions, with very good estimation results (CC > 0.97) on a realistic head model in the regions of interest for two representative cases. The BP-CPI can be easily incorporated into different monitoring tools and help researchers by maintaining an accurate estimation of the cortical potential of ongoing or event-related potentials in order to draw better neurological inferences from the EEG.
NASA Astrophysics Data System (ADS)
Liu, Y.; Zhou, J.; Song, L.; Zou, Q.; Guo, J.; Wang, Y.
2014-02-01
In recent years, an important development in flood management has been the focal shift from flood protection towards flood risk management. This change has greatly promoted the progress of flood control research in a multidisciplinary way. Moreover, given the growing complexity and uncertainty in many decision situations of flood risk management, traditional methods, e.g., tight-coupling integration of one or more quantitative models, are not enough to provide decision support for managers. Within this context, this paper presents a beneficial methodological framework to enhance the effectiveness of decision support systems through the dynamic adaptation of support to the needs of the decision-maker. In addition, we illustrate a loose-coupling technical prototype for integrating heterogeneous elements, such as multi-source data, multidisciplinary models, GIS tools and existing systems. The main innovation is the application of model-driven concepts, which put the system in a state of continuous iterative optimization. We define the new system as a model-driven decision support system (MDSS). Two characteristics that differentiate the MDSS are as follows: (1) it is made accessible to non-technical specialists; and (2) it has a higher level of adaptability and compatibility. Furthermore, the MDSS was employed to manage the flood risk in the Jingjiang flood diversion area, located in central China near the Yangtze River. Compared with traditional solutions, we believe that this model-driven method is efficient, adaptable and flexible, and thus has bright prospects for application in comprehensive flood risk management.
A web-based system for supporting global land cover data production
NASA Astrophysics Data System (ADS)
Han, Gang; Chen, Jun; He, Chaoying; Li, Songnian; Wu, Hao; Liao, Anping; Peng, Shu
2015-05-01
Global land cover (GLC) data production and verification is a very complicated, time-consuming and labor-intensive process, requiring a huge amount of imagery and ancillary data and involving many people, often from different geographic locations. The efficient integration of various kinds of ancillary data and effective collaborative classification in large-area land cover mapping require advanced supporting tools. This paper presents the design and development of a web-based system for supporting 30-m resolution GLC data production by combining geo-spatial web services and Computer Supported Collaborative Work (CSCW) technology. Based on an analysis of the functional and non-functional requirements of GLC mapping, a three-tier system model is proposed with four major parts, i.e., multisource data resources, data and function services, interactive mapping and production management. The prototyping and implementation of the system have been realised by a combination of Open Source Software (OSS) and commercially available off-the-shelf systems. This web-based system not only facilitates the integration of heterogeneous data and services required by GLC data production, but also provides online access, visualization and analysis of the images, ancillary data and interim 30 m global land-cover maps. The system further supports online collaborative quality check and verification workflows. It has been successfully applied to China's 30-m resolution GLC mapping project, and has significantly improved the efficiency of GLC data production and verification. The concepts developed through this study should also benefit other GLC or regional land-cover data production efforts.
GeoBrain Computational Cyber-laboratory for Earth Science Studies
NASA Astrophysics Data System (ADS)
Deng, M.; di, L.
2009-12-01
Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high-performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.
Unified Research on Network-Based Hard/Soft Information Fusion
2016-02-02
... types). There are a number of search tree run parameters which must be set depending on the experimental setting; a pilot study was run to identify... Final Report: Unified Research on Network-Based Hard/Soft Information Fusion, The University at Buffalo (UB) Center for Multisource
Young, Bridget; Ward, Jo; Forsey, Mary; Gravenhorst, Katja; Salmon, Peter
2011-10-01
We explored parent-doctor relationships in the care of children with leukaemia from three perspectives simultaneously: parents', doctors' and observers'. Our aim was to investigate convergence and divergence between these perspectives and thereby examine the validity of a unitary theory of emotionality and authority in clinical relationships. We analysed 33 audiorecorded parent-doctor consultations and separate interviews with parents and doctors qualitatively, and from these selected three prototype cases. Across the whole sample, doctors' sense of the relationship generally converged with our observations of the consultations, but parents' sense of the relationship diverged strongly from both. Contrary to current assumptions, parents' sense of emotional connection with doctors did not depend on doctors' emotional behaviour, and parents did not feel disempowered by doctors' authority. Moreover, authority and emotionality were not conceptually distinct for parents, who gained emotional support from doctors' exercise of authority. The relationships looked very different from the three perspectives. These divergences indicate weaknesses in current ideas of emotionality and authority in clinical relationships and the necessity of multisource datasets to develop these ideas in a way that characterises clinical relationships from all perspectives. Methodological development will be needed to address the challenges posed by multisource datasets. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, Wenmin; Wang, Zhongcheng; Li, Chunhua; Zhao, Jin; Li, Yi
2018-02-01
Multi-source remote sensing data are rarely used for the comprehensive assessment of land ecological environment quality. In this study, a digital environmental model was proposed with an inversion algorithm for land and environmental factors based on multi-source remote sensing data, and a comprehensive index (Ecoindex) was applied to reconstruct and predict the land environment quality of the Dongting Lake Area and to assess the effect of human activities on the environment. The main finding was that areas of Grade I and Grade II quality showed a decreasing tendency in the lake area, mostly in suburbs and wetlands. Atmospheric water vapour, land use intensity, surface temperature, vegetation coverage, and soil water content were the main driving factors. The cause of degradation was the interference of multi-factor combinations, which led to positive and negative environmental agglomeration effects. Positive agglomeration, such as increased rainfall and vegetation coverage and reduced land use intensity, could increase environmental quality, while negative agglomeration resulted in the opposite. Therefore, reasonable ecological restoration measures should be adopted to limit the negative effects and the decreasing tendency, improve the land ecological environment quality and provide references for macroscopic planning by the government.
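The abstract does not specify how the comprehensive index (Ecoindex) is constructed, so the sketch below only illustrates one plausible construction: min-max normalized factor rasters combined with weights, with pressure factors inverted and the result binned into quality grades. The factor set, the weights and the grade thresholds are all hypothetical, not values from the paper.

```python
import numpy as np

def minmax(x):
    """Rescale a factor raster to 0-1 (1 = most favourable)."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def ecoindex(veg_cover, soil_water, land_use_intensity, surf_temp,
             weights=(0.3, 0.2, 0.3, 0.2)):
    """Toy comprehensive index: favourable factors enter directly,
    pressure factors (land use intensity, surface temperature) enter inverted."""
    layers = [minmax(veg_cover), minmax(soil_water),
              1 - minmax(land_use_intensity), 1 - minmax(surf_temp)]
    return sum(w * l for w, l in zip(weights, layers))

rng = np.random.default_rng(6)
shape = (3, 3)
idx = ecoindex(rng.uniform(0, 1, shape), rng.uniform(0.1, 0.4, shape),
               rng.uniform(0, 10, shape), rng.uniform(290, 310, shape))
# Bin the continuous index into quality grades; thresholds are illustrative only.
grades = np.digitize(idx, [0.3, 0.5, 0.7])   # 0 = worst ... 3 = best
print(np.round(idx, 2), grades, sep="\n")
```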
A Multisource Approach to Assessing Child Maltreatment From Records, Caregivers, and Children.
Sierau, Susan; Brand, Tilman; Manly, Jody Todd; Schlesier-Michel, Andrea; Klein, Annette M; Andreas, Anna; Garzón, Leonhard Quintero; Keil, Jan; Binser, Martin J; von Klitzing, Kai; White, Lars O
2017-02-01
Practitioners and researchers alike face the challenge that different sources report inconsistent information regarding child maltreatment. The present study capitalizes on concordance and discordance between different sources and probes the applicability of a multisource approach to data from three perspectives on maltreatment: Child Protection Services (CPS) records, caregivers, and children. The sample comprised 686 participants in early childhood (3- to 8-year-olds; n = 275) or late childhood/adolescence (9- to 16-year-olds; n = 411), 161 from two CPS sites and 525 from the community oversampled for psychosocial risk. We established three components within a factor-analytic approach: the shared variance between sources on the presence of maltreatment (convergence), nonshared variance resulting from the child's own perspective, and the caregiver versus CPS perspective. The shared variance between sources was the strongest predictor of caregiver- and self-reported child symptoms. The child perspective and the caregiver versus CPS perspective mainly added predictive strength for symptoms in late childhood/adolescence over and above convergence in the case of emotional maltreatment, lack of supervision, and physical abuse. By contrast, convergence almost fully accounted for child symptoms for failure to provide. Our results suggest that consistent information from different sources reporting on maltreatment is, on average, the best indicator of child risk.
Family physician practice visits arising from the Alberta Physician Achievement Review
2013-01-01
Background Licensed physicians in Alberta are required to participate in the Physician Achievement Review (PAR) program every 5 years, comprising multi-source feedback questionnaires with confidential feedback, and practice visits for a minority of physicians. We wished to identify and classify issues requiring change or improvement from the family practice visits, and the responses to advice. Methods Retrospective analysis of narrative practice visit reports data using a mixed methods design to study records of visits to 51 family physicians and general practitioners who participated in PAR during the period 2010 to 2011, and whose ratings in one or more major assessment domains were significantly lower than their peer group. Results Reports from visits to the practices of family physicians and general practitioners confirmed opportunities for change and improvement, with two main groupings – practice environment and physician performance. For 40/51 physicians (78%) suggested actions were discussed with physicians and changes were confirmed. Areas of particular concern included problems arising from practice isolation and diagnostic conclusions being reached with incomplete clinical evidence. Conclusion This study provides additional evidence for the construct validity of a regulatory authority educational program in which multi-source performance feedback identifies areas for practice quality improvement, and change is encouraged by supplementary contact for selected physicians. PMID:24010980
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey
2001-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team comprises Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" have been fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source, multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.
Pradhan, Biswajeet; Chaudhari, Amruta; Adinarayana, J; Buchroithner, Manfred F
2012-01-01
In this paper, an attempt has been made to assess, prognose and observe the dynamism of soil erosion using the universal soil loss equation (USLE) method at Penang Island, Malaysia. Multi-source (map-, space- and ground-based) datasets were used to obtain both the static and dynamic factors of the USLE, and an integrated analysis was carried out in the raster format of a GIS. A landslide location map was generated on the basis of image element interpretation from aerial photos, satellite data and field observations, and was used to validate the soil erosion intensity in the study area. Further, a statistical frequency ratio analysis was carried out in the study area for correlation purposes. The results of the statistical correlation showed satisfactory agreement between the prepared USLE-based soil erosion map and the landslide events/locations, which are directly proportional to each other. Such prognosis analysis of soil erosion helps user agencies and decision makers design proper conservation planning programs to reduce soil erosion. Temporal statistics on soil erosion amid the dynamic and rapid developments on Penang Island indicate the co-existence and balance of the ecosystem.
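The USLE computation itself is a per-cell product of factor rasters, A = R * K * LS * C * P (soil loss, typically in t/ha/yr); a minimal raster sketch follows, with synthetic factor values and hypothetical erosion-intensity thresholds standing in for the study's map-, space- and ground-based inputs.

```python
import numpy as np

# USLE: A = R * K * LS * C * P, evaluated cell by cell over factor rasters.
# The rasters below are synthetic stand-ins for the multi-source inputs.
rng = np.random.default_rng(7)
shape = (4, 4)
R  = rng.uniform(900, 1200, shape)   # rainfall erosivity
K  = rng.uniform(0.05, 0.2, shape)   # soil erodibility
LS = rng.uniform(0.5, 8.0, shape)    # slope length / steepness
C  = rng.uniform(0.01, 0.5, shape)   # cover management
P  = rng.uniform(0.5, 1.0, shape)    # support practice

A = R * K * LS * C * P
# Classify erosion intensity with illustrative thresholds for mapping/inspection.
classes = np.digitize(A, [10, 50, 150])   # 0 = low ... 3 = very high
print(np.round(A, 1), classes, sep="\n")
```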
'Big Data' Collaboration: Exploring, Recording and Sharing Enterprise Knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Ferrell, Regina Kay
2013-01-01
As data sources and data sizes proliferate, knowledge discovery from "Big Data" is starting to pose several challenges. In this paper, we address a specific challenge in the practice of enterprise knowledge management while extracting actionable nuggets from diverse data sources of seemingly-related information. In particular, we address the challenge of archiving knowledge gained through collaboration, dissemination and visualization as part of the data analysis, inference and decision-making lifecycle. We motivate the implementation of an enterprise data-discovery and knowledge recorder tool, called SEEKER, based on a real-world case study. We demonstrate SEEKER capturing schema and data-element relationships, tracking the data elements of value based on the queries and the analytical artifacts that are being created by analysts as they use the data. We show how the tool serves as a digital record of institutional domain knowledge and as documentation of the evolution of data elements, queries and schemas over time. As a knowledge management service, a tool like SEEKER saves enterprise resources and time by avoiding analytic silos, expediting the process of multi-source data integration and intelligently documenting discoveries from fellow analysts.
Model-driven development of covariances for spatiotemporal environmental health assessment.
Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George
2013-01-01
Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.
Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco
2014-05-01
The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
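To make the query-language idea concrete, here is a minimal sketch, assuming a hypothetical rasdaman endpoint and coverage name, of how a WCPS request of the kind described above could be issued from Python; the query syntax follows the general WCPS style and is illustrative rather than an actual project query:

import requests

# Hypothetical rasdaman/EarthServer endpoint and coverage name -- placeholders only.
ENDPOINT = "https://example.org/rasdaman/ows"

# Illustrative WCPS query: subset a space-time coverage and encode the slice as GeoTIFF.
wcps_query = (
    'for c in (MyTemperatureCoverage) '
    'return encode(c[Lat(40:60), Long(-10:20), ansi("2014-07-01")], "image/tiff")'
)

response = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
with open("slice.tif", "wb") as f:
    f.write(response.content)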
Yang, Guanxue; Wang, Lin; Wang, Xiaofan
2017-06-07
Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking some polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets based on a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrate sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge 4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.
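A minimal sketch of the general idea follows: lagged source series are expanded with radial basis functions, one group of basis features per candidate driver, and a sparse regression selects the groups. For brevity the sketch substitutes an ordinary lasso for the grouped penalty used by the authors, and all sizes, lags and parameters are illustrative:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, n_nodes, n_centers, lag = 500, 5, 4, 1
X = rng.standard_normal((T, n_nodes))          # toy multivariate time series
X[1:, 0] += 0.8 * np.tanh(X[:-1, 1])           # node 1 nonlinearly drives node 0

def rbf_features(x, centers, width=1.0):
    # Radial basis expansion of a single lagged series.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

centers = np.linspace(-2, 2, n_centers)
target = 0                                      # reconstruct the incoming links of node 0
Y = X[lag:, target]
groups = [rbf_features(X[:-lag, j], centers) for j in range(n_nodes)]  # one group per source node
Phi = np.hstack(groups)

model = Lasso(alpha=0.05).fit(Phi, Y)           # stand-in for the group-lasso penalty
coef_groups = model.coef_.reshape(n_nodes, n_centers)
strength = np.linalg.norm(coef_groups, axis=1)  # per-source group norm ~ causal strength
print(strength)                                 # source 1 should dominate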
Development of a Multi-bus, Multi-source Reconfigurable Stirling Radioisotope Power System Test Bed
NASA Technical Reports Server (NTRS)
Coleman, Anthony S.
2004-01-01
The National Aeronautics and Space Administration (NASA) has typically used Radioisotope Thermoelectric Generators (RTG) as their source of electric power for deep space missions. A more efficient and potentially more cost effective alternative to the RTG, the high efficiency 110 watt Stirling Radioisotope Generator 110 (SRG110) is being developed by the Department of Energy (DOE), Lockheed Martin (LM), Stirling Technology Company (STC) and NASA Glenn Research Center (GRC). The SRG110 consists of two Stirling convertors (Stirling Engine and Linear Alternator) in a dual-opposed configuration, and two General Purpose Heat Source (GPHS) modules. Although Stirling convertors have been successfully operated as a power source for the utility grid and as a stand-alone portable generator, demonstration of the technology required to interconnect two Stirling convertors for a spacecraft power system has not been attempted. NASA GRC is developing a Power System Test Bed (PSTB) to evaluate the performance of a Stirling convertor in an integrated electrical power system application. This paper will describe the status of the PSTB and on-going activities pertaining to the PSTB in the NASA Thermal-Energy Conversion Branch of the Power and On-Board Propulsion Technology Division.
Judicialization 2.0: Understanding right-to-health litigation in real time.
Biehl, João; Socal, Mariana P; Gauri, Varun; Diniz, Debora; Medeiros, Marcelo; Rondon, Gabriela; Amon, Joseph J
2018-05-21
Over the past two decades, debate over the whys, the hows, and the effects of the ever-expanding phenomenon of right-to-health litigation ('judicialization') throughout Latin America has been marked by polarised arguments and limited information. In contrast to claims of judicialization as a positive or negative trend, less attention has been paid to ways of better understanding the phenomenon in real time. In this article, we propose a new approach-Judicialization 2.0-that recognises judicialization as an integral part of democratic life. This approach seeks to expand access to information about litigation on access to medicines (and health care generally) in order to better characterise the complexity of the phenomenon and thus inform new research and more robust public discussions. Drawing from our multi-disciplinary perspectives and field experiences in highly judicialized contexts, we thus describe a new multi-source, multi-stakeholder mixed-method approach designed to capture the patterns and heterogeneity of judicialization and understand its medical and socio-political impact in real time, along with its counterfactuals. By facilitating greater data availability and open access, we can drive advancements towards transparent and participatory priority setting, as well as accountability mechanisms that promote quality universal health coverage.
Probabilistic drug connectivity mapping
2014-01-01
Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351
SU-E-J-56: Static Gantry Digital Tomosynthesis From the Beam’s-Eye-View
DOE Office of Scientific and Technical Information (OSTI.GOV)
Partain, L; Kwon, J; Boyd, D
Purpose: We have designed a novel TumoTrak™ x-ray system that delivers 19 distinct kV views with the linac gantry stationary. It images the MV treatment beam above and below the patient with a kV tomosynthesis slice image from the therapy beam’s-eye-view. The result will be high-quality images without MLC shadowing, a notable improvement relative to conventional fluoroscopic MV imaging and fluoroscopic kV imaging. Methods: A complete design has a kV electron beam multisource X-ray tube that fits around the MV treatment beam path, with little interference with normal radiotherapy and unblocked by the multi-leaf collimator. To simulate digital tomosynthesis, we used cone-beam CT projection data from a lung SBRT patient. These data were acquired at 125 kVp and 11 fps (0.4 mAs per projection). We chose 19 projections evenly spaced over 27° around one of the treatment angles (240°). Digital tomosynthesis reconstruction of a slice through the tumor was performed using iterative reconstruction. The visibility of the lesion was assessed for the reconstructed digital tomosynthesis (DTS), for fluoroscopy MV images acquired during radiation therapy, and for a kV single projection image acquired at the same angle as the treatment field (240°). Results: The fluoroscopic DTS images provide the best tumor contrast, surpassing the conventional radiographic and the in-treatment MV portal images. The electron beam multisource X-ray tube design has been completed and the tube is being fabricated. The estimated time to cycle through all 19 projections is 700 ms, enabling high frame-rate imaging. While the initial proposed use case is for image-guided and gated treatment delivery, the enhanced imaging will also deliver superior radiographic images for patient setup. Conclusion: The proposed device will deliver high-quality planar images from the beam’s-eye-view without MLC obstruction. The prototype has been designed and is being assembled, with first imaging scheduled for May 2015. L. Partain, J. Kwon, D. Boyd: NIH/SBIR R43CA192489-01. J. Rottmann, G. Zentai, R. Berbeco: NIH/NCI 1R01CA188446-01. R. Berbeco: E. Research Grant, Varian Medical Systems.
Estimating Carbon Storage and Sequestration by Urban Trees at Multiple Spatial Resolutions
NASA Astrophysics Data System (ADS)
Wu, J.; Tran, A.; Liao, A.
2010-12-01
Urban forests are an important component of urban-suburban environments. Urban trees provide not only a full range of social and psychological benefits to city dwellers, but also valuable ecosystem services to communities, such as removing atmospheric carbon dioxide, improving air quality, and reducing storm water runoff. There is an urgent need for developing strategic conservation plans for environmentally sustainable urban-suburban development based on the scientific understanding of the extent and function of urban forests. However, several challenges remain to accurately quantify various environmental benefits provided by urban trees, among which is to deal with the effect of changing spatial resolution and/or scale. In this study, we intended to examine the uncertainties of carbon storage and sequestration associated with the tree canopy coverage of different spatial resolutions. Multi-source satellite imagery data were acquired for the City of Fullerton, located in Orange County of California. The tree canopy coverage of the study area was classified at three spatial resolutions, ranging from 30 m (Landsat-5 Thematic Mapper), 15 m (Advanced Spaceborne Thermal Emission and Reflection Radiometer), to 2.5 m (QuickBird). We calculated the amount of carbon stored in the trees represented on the individual tree coverage maps and the annual carbon taken up by the trees with a model (i.e., CITYgreen) developed by the U.S. Forest Service. The results indicate that urban trees account for significant proportions of land cover in the study area even with the low spatial resolution data. The estimated carbon fixation benefits vary greatly depending on the details of land use and land cover classification. The extrapolation of estimation from the fine-resolution stand-level to the low-resolution landscape-scale will likely not preserve reasonable accuracy.
NASA Astrophysics Data System (ADS)
Huang, Q.; Long, D.; Du, M.; Hong, Y.
2017-12-01
River discharge is among the most important hydrological variables of concern to hydrologists, as it links drinking water supply, irrigation, and flood forecasting together. Despite its importance, gauging stations are extremely limited across most alpine regions, such as the Tibetan Plateau (TP), known as Asia's water tower. The use of remote sensing combined with partial in situ discharge measurements is a promising way of retrieving river discharge over ungauged or poorly gauged basins. Successful discharge estimation depends largely on accurate water width (area) and water level, but it is challenging to obtain these variables for alpine regions from a single satellite platform due to narrow river channels, complex terrain, and limited observations. Here, we used high-spatial-resolution images from the Landsat series to derive water area, and satellite altimetry (Jason-2) to derive water level for the Upper Brahmaputra River (UBR) in the TP, where the river is narrow (less than 400 m wide in most reaches). We performed waveform retracking using a 50% Threshold and Ice-1 Combined (TIC) algorithm developed in this study to obtain accurate water level measurements. The discharge was estimated well using a range of derived formulas, including the power function between water level and discharge, and that between water area and discharge, suitable for the triangular cross-section around the Nuxia gauging station in the UBR. Results showed that the power function using Jason-2-derived water levels after waveform retracking performed best, with an overall NSE value of 0.92. The proposed approach for remotely sensed river discharge is effective in the UBR and possibly other alpine rivers globally.
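The stage-discharge relation described above can be illustrated with a small curve-fitting sketch; the water levels and discharges below are synthetic values generated from an assumed power law, not the Nuxia record, and the parameter names are placeholders:

import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-ins: levels (m) and discharges (m^3/s) generated from an assumed
# power law so the fit is well behaved; these are not the Nuxia observations.
a_true, h0_true, b_true = 500.0, 3045.0, 1.8
level = np.array([3046.2, 3046.8, 3047.5, 3048.1, 3048.9, 3049.6])
discharge = a_true * (level - h0_true) ** b_true

def rating_curve(h, a, h0, b):
    # Power-law stage-discharge relation Q = a * (h - h0)^b.
    return a * (h - h0) ** b

bounds = ([1.0, 3040.0, 0.5], [1e5, level.min() - 0.1, 5.0])
params, _ = curve_fit(rating_curve, level, discharge, p0=(100.0, 3044.0, 2.0), bounds=bounds)
a, h0, b = params
print(f"Q = {a:.0f} * (h - {h0:.1f})^{b:.2f}")

# Nash-Sutcliffe efficiency (NSE), the score used to judge the estimates.
pred = rating_curve(level, *params)
nse = 1 - np.sum((discharge - pred) ** 2) / np.sum((discharge - discharge.mean()) ** 2)
print(f"NSE = {nse:.2f}")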
Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto
2017-09-08
Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% ( n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% ( n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
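The annual update rule can be sketched compactly; the allocation fraction, baseline map and NPP fields below are placeholders rather than BEPS output, so this is only an illustration of the bookkeeping AGB_t = AGB_(t-1) + ABI_t:

import numpy as np

# Placeholder baseline AGB map for 2008 (kg/m^2) and simulated annual NPP for 2009-2012.
agb = np.array([[4.2, 6.8], [9.1, 3.5]])
npp_by_year = {yr: np.full((2, 2), 0.35) for yr in range(2009, 2013)}  # placeholder NPP fields

ALLOC_TO_AGB = 0.45   # assumed fraction of NPP allocated to aboveground biomass increment

for year, npp in sorted(npp_by_year.items()):
    abi = ALLOC_TO_AGB * npp          # annual aboveground biomass increment (ABI)
    agb = agb + abi                   # AGB_t = AGB_(t-1) + ABI_t
    print(year, agb.round(2))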
1980-12-01
Prepared for the U.S. Army Corps of Engineers, Engineer Topographic Laboratories, Fort Belvoir; December 1980; approved for public release, distribution unlimited. Infrared and panchromatic imagery was collected by the Oregon Army National Guard at the Corvallis, Oregon, test site on 13 and 19 August 1980.
Multi-Source Fusion for Explosive Hazard Detection in Forward Looking Sensors
2016-12-01
Efforts include: (1) investigating (a) thermal, (b) synthetic aperture acoustics (SAA) and (c) voxel-space radar for buried and side-attack threats; (2) ... detection; (3) with respect to SAA, developing new approaches in the time and frequency domains for analyzing signatures of concealed targets (called ... Fraz), as well as a method to extract a multi-spectral signature from SAA, with deep learning used on limited training data and class imbalance.
DiMasi, Joseph A; Smith, Zachary; Getz, Kenneth A
2018-05-10
The extent to which new drug developers can benefit financially from shorter development times has implications for development efficiency and innovation incentives. We provided a real-world example of such gains by using recent estimates of drug development costs and returns. Time and fee data were obtained on 5 single-source manufacturing projects. Time and fees were modeled for these projects as if the drug substance and drug product processes had been contracted separately from 2 vendors. The multi-vendor model was taken as the base case, and financial impacts from single-source contracting were determined relative to the base case. The mean and median after-tax financial benefits of shorter development times from single-source contracting were $44.7 million and $34.9 million, respectively (2016 dollars). The after-tax increases in sponsor fees from single-source contracting were small in comparison (mean and median of $0.65 million and $0.25 million). For the data we examined, single-source contracting yielded substantial financial benefits over multi-source contracting, even after accounting for somewhat higher sponsor fees. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
MSWEP V2 global 3-hourly 0.1° precipitation: methodology and quantitative appraisal
NASA Astrophysics Data System (ADS)
Beck, H.; Yang, L.; Pan, M.; Wood, E. F.; William, L.
2017-12-01
Here, we present Multi-Source Weighted-Ensemble Precipitation (MSWEP) V2, the first fully global gridded precipitation (P) dataset with a 0.1° spatial resolution. The dataset covers the period 1979-2016, has a 3-hourly temporal resolution, and was derived by optimally merging a wide range of data sources based on gauges (WorldClim, GHCN-D, GSOD, and others), satellites (CMORPH, GridSat, GSMaP, and TMPA 3B42RT), and reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR). MSWEP V2 implements some major improvements over V1, such as (i) the correction of distributional P biases using cumulative distribution function matching, (ii) increasing the spatial resolution from 0.25° to 0.1°, (iii) the inclusion of ocean areas, (iv) the addition of NCEP-CFSR P estimates, (v) the addition of thermal infrared-based P estimates for the pre-TRMM era, (vi) the addition of 0.1° daily interpolated gauge data, (vii) the use of a daily gauge correction scheme that accounts for regional differences in the 24-hour accumulation period of gauges, and (viii) extension of the data record to 2016. The gauge-based assessment of the reanalysis and satellite P datasets, necessary for establishing the merging weights, revealed that the reanalysis datasets strongly overestimate the P frequency for the entire globe, and that the satellite datasets consistently performed better at low latitudes while the reanalysis datasets performed better at high latitudes. Compared to other state-of-the-art P datasets, MSWEP V2 exhibits more plausible global patterns in mean annual P, percentiles, and annual number of dry days, and better resolves the small-scale variability over topographically complex terrain. Other P datasets appear to consistently underestimate P amounts over mountainous regions. Long-term mean P estimates for the global, land, and ocean domains based on MSWEP V2 are 959, 796, and 1026 mm/yr, respectively, in close agreement with the best previously published estimates.
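Improvement (i), the correction of distributional biases by cumulative distribution function matching, can be illustrated with a small empirical quantile-mapping sketch; the two samples are synthetic stand-ins for a biased source and a reference, not MSWEP inputs:

import numpy as np

rng = np.random.default_rng(1)
reference = rng.gamma(shape=0.6, scale=8.0, size=5000)   # synthetic "gauge-like" reference P
biased = rng.gamma(shape=0.6, scale=12.0, size=5000)     # synthetic over-wet source P

def cdf_match(values, source_sample, target_sample):
    # Empirical quantile mapping: map each value through the source CDF,
    # then back through the inverse CDF of the target distribution.
    quantiles = np.searchsorted(np.sort(source_sample), values) / len(source_sample)
    quantiles = np.clip(quantiles, 0, 1)
    return np.quantile(target_sample, quantiles)

corrected = cdf_match(biased, biased, reference)
print(biased.mean().round(2), corrected.mean().round(2), reference.mean().round(2))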
Ye, Hongqiang; Ma, Qijun; Hou, Yuezhong; Li, Man; Zhou, Yongsheng
2017-12-01
Digital techniques are not clinically applied for 1-piece maxillary prostheses containing an obturator and removable partial denture retained by the remaining teeth because of the difficulty in obtaining sufficiently accurate 3-dimensional (3D) images. The purpose of this pilot clinical study was to generate 3D digital casts of maxillary defects, including the defective region and the maxillary dentition, based on multisource data registration and to evaluate their effectiveness. Twelve participants with maxillary defects were selected. The maxillofacial region was scanned with spiral computed tomography (CT), and the maxillary arch and palate were scanned using an intraoral optical scanner. The 3D images from the CT and intraoral scanner were registered and merged to form a 3D digital cast of the maxillary defect containing the anatomic structures needed for the maxillary prosthesis. This included the defect cavity, maxillary dentition, and palate. Traditional silicone impressions were also made, and stone casts were poured. The accuracy of the digital cast in comparison with that of the stone cast was evaluated by measuring the distance between 4 anatomic landmarks. Differences and consistencies were assessed using paired Student t tests and the intraclass correlation coefficient (ICC). In 3 participants, physical resin casts were produced by rapid prototyping from digital casts. Based on the resin casts, maxillary prostheses were fabricated by using conventional methods and then evaluated in the participants to assess the clinical applicability of the digital casts. Digital casts of the maxillary defects were generated and contained all the anatomic details needed for the maxillary prosthesis. Comparing the digital and stone casts, a paired Student t test indicated that differences in the linear distances between landmarks were not statistically significant (P>.05). High ICC values (0.977 to 0.998) for the interlandmark distances further indicated the high degree of consistency between the digital and stone casts. The maxillary prostheses showed good clinical effectiveness, indicating that the corresponding digital casts met the requirements for clinical application. Based on multisource data from spiral CT and the intraoral scanner, 3D digital casts of maxillary defects were generated using the registration technique. These casts were consistent with conventional stone casts in terms of accuracy and were suitable for clinical use. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires the integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and the laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
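The core of the first method, driving a point cloud onto a reference model by minimizing point-to-model distances, can be sketched with a bare-bones, translation-only iterative closest point loop on synthetic points (the full method also recovers rotations; everything below is illustrative):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
model_pts = rng.uniform(0, 50, size=(2000, 3))            # points sampled from the 3D model
true_shift = np.array([1.5, -0.8, 0.3])
als_pts = model_pts[rng.choice(2000, 500, replace=False)] + true_shift  # mis-registered ALS cloud

tree = cKDTree(model_pts)
shift = np.zeros(3)
for _ in range(20):                                        # iterative closest point, shift-only
    _, idx = tree.query(als_pts + shift)                   # nearest model point for each ALS point
    residual = model_pts[idx] - (als_pts + shift)
    shift += residual.mean(axis=0)                         # update the translation by the mean residual

# The estimated shift should converge to the negative of the applied offset.
print("estimated shift:", np.round(shift, 2), "expected:", -true_shift)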
NASA Astrophysics Data System (ADS)
Shen, Yan-Jun; Shen, Yanjun; Fink, Manfred; Kralisch, Sven; Brenning, Alexander
2018-01-01
Understanding the water balance, especially as it relates to the distribution of runoff components, is crucial for water resource management and coping with the impacts of climate change. However, hydrological processes are poorly known in mountainous regions due to data scarcity and the complex dynamics of snow and glaciers. This study aims to provide a quantitative comparison of gridded precipitation products in the Tianshan Mountains, located in Central Asia, in order to further understand the mountain hydrology and the distribution of runoff components in the glacierized Kaidu Basin. We found that gridded precipitation products are affected by inconsistent biases based on a spatiotemporal comparison with the nearest weather stations and should be evaluated with caution before being used as boundary conditions in hydrological modeling. Although uncertainties remain in this data-scarce basin, driven by field survey data and bias-corrected gridded data sets (ERA-Interim and APHRODITE), the water balance and distribution of runoff components can be plausibly quantified with the distributed hydrological model (J2000). We further examined parameter sensitivity and uncertainty with respect to both simulated streamflow and different runoff components based on an ensemble of simulations. This study demonstrated the possibility of integrating gridded products in hydrological modeling. The methodology used can be important for model applications and design in other data-scarce mountainous regions. The model-based simulation quantified the water balance and how the water resources are partitioned throughout the year in Tianshan Mountain basins, although the uncertainties present in this study result in important limitations.
Liu, Huanjun; Huffman, Ted; Liu, Jiangui; Li, Zhe; Daneshfar, Bahram; Zhang, Xinle
2015-01-01
Understanding agricultural ecosystems and their complex interactions with the environment is important for improving agricultural sustainability and environmental protection. Developing the necessary understanding requires approaches that integrate multi-source geospatial data and interdisciplinary relationships at different spatial scales. In order to identify and delineate landscape units representing relatively homogenous biophysical properties and eco-environmental functions at different spatial scales, a hierarchical system of uniform management zones (UMZ) is proposed. The UMZ hierarchy consists of seven levels of units at different spatial scales, namely site-specific, field, local, regional, country, continent, and globe. Relatively few studies have focused on the identification of the two middle levels of units in the hierarchy, namely the local UMZ (LUMZ) and the regional UMZ (RUMZ), which prevents true eco-environmental studies from being carried out across the full range of scales. This study presents a methodology to delineate LUMZ and RUMZ spatial units using land cover, soil, and remote sensing data. A set of objective criteria were defined and applied to evaluate the within-zone homogeneity and between-zone separation of the delineated zones. The approach was applied in a farming and forestry region in southeastern Ontario, Canada, and the methodology was shown to be objective, flexible, and applicable with commonly available spatial data. The hierarchical delineation of UMZs can be used as a tool to organize the spatial structure of agricultural landscapes, to understand spatial relationships between cropping practices and natural resources, and to target areas for application of specific environmental process models and place-based policy interventions.
NASA Astrophysics Data System (ADS)
Radło-Kulisiewicz, M.
2015-12-01
The practical importance of Geographical Information Systems in urban planning and the management of urban areas is becoming increasingly evident. Managing small cities usually requires only simple GIS spatial analysis tools to support planners' decisions. In larger cities, however, urban dynamics are stronger and the factors driving change interact, so these simple analyses are no longer sufficient and more advanced and sophisticated solutions are needed. The aim of this article is to introduce popular techniques for urban modelling and to underline the importance of GIS as an environment for creating simple models that make it easier to take decisions when shaping the vision of a city. The article touches on the following issues related to the planning and management of urban space: from the applicable standards concerning planning materials in Poland, through the possibilities offered by network solutions useful at the municipal and national levels, to existing techniques for modelling cities around the world. The background for these questions is Geographical Information Systems and their role in this respect, which naturally fit into this theme. The ability to analyze multi-source data at different levels of detail, in different variants and ranges, predisposes GIS to environmental urban management. Taking socio-economic factors into account, integrated with GIS-based predictive modelling techniques, allows us to understand the dependencies that drive complex urban phenomena. Managing a city in an integrated and thoughtful manner will reduce the costs associated with the expansion of the urban fabric and avoid the chaos of urban development.
Advanced techniques for the storage and use of very large, heterogeneous spatial databases
NASA Technical Reports Server (NTRS)
Peuquet, Donna J.
1987-01-01
Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.
On the role of differenced phase-delays in high-precision wide-field multi-source astrometry
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Guirado, J. C.
2007-07-01
Phase-delay is, by far, the most precise observable used in interferometry. In typical very-long-baseline-interferometry (VLBI) observations, the uncertainties of the phase-delays can be about 100 times smaller than those of the group delays. However, the phase-delays have an important handicap: they are ambiguous, since they are computed from the relative phases of the signals of the different antennas, and an indeterminate number of complete 2π cycles can be added to those phases leaving them unchanged. There are different approaches to solving the ambiguity problem of the phase delays (Shapiro et al., 1979; Beasley & Conway, 1995), but none of them has ever been used in observations involving more than 2-3 sources. In this contribution, we report for the first time a wide-field multi-source astrometric analysis performed on a complete set of radio sources using the phase-delay observable. The target of our analysis is the S5 polar cap sample, consisting of 13 bright ICRF sources near the North Celestial Pole. We have developed new algorithms and updated existing software to correct, in an automatic way, the ambiguities of the phase-delays and, therefore, perform a phase-delay astrometric analysis of all the sources in the sample. We also discuss the impact of the use of phase-delays on the astrometric precision.
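In symbols (a standard textbook relation rather than anything specific to this analysis), the ambiguity means that the measured fringe phase fixes the delay only up to an integer number of cycles of the observing frequency:

\[
\tau_{\mathrm{phase}} = \frac{\phi + 2\pi n}{2\pi\,\nu}, \qquad n \in \mathbb{Z},
\]

where \(\phi\) is the measured phase, \(\nu\) the observing frequency, and the integer \(n\) is the unknown cycle count that must be resolved before phase-delays can be used astrometrically.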
Technical note: A linear model for predicting δ13 Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high correlation (r = 0.93, r² = 0.86, P < 0.01) and an experimentally generated error term of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
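The published two-term model can be applied directly; a small helper function, with the reported error term attached, might look as follows (the input values at the bottom are illustrative, not data from the Atacama individuals):

def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Predict the dietary-protein carbon isotope value from bone collagen data.

    Implements the published linear model
        d13Cprotein = 0.78 * d13Ccollagen - 0.58 * D13Cap-co - 4.7
    with the reported +/- 1.9 per-mil prediction error.
    """
    estimate = 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7
    return estimate, (estimate - 1.9, estimate + 1.9)

# Illustrative input values (per mil), not measurements from the study sample.
value, interval = predict_d13c_protein(d13c_collagen=-18.5, delta13c_ap_co=4.5)
print(round(value, 1), tuple(round(v, 1) for v in interval))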
In-vehicle group activity modeling and simulation in sensor-based virtual environment
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occultation. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.
Developing a multisource feedback tool for postgraduate medical educational supervisors.
Archer, Julian; Swanwick, Tim; Smith, Daniel; O'Keeffe, Catherine; Cater, Nerys
2013-01-01
Supervisors play a key role in the development of postgraduate medical trainees, both in the oversight of their day-to-day clinical practice and in the support of their learning experiences. In the UK, there has been a clear distinction made between these two activities. In this article, we report on the development of a web-based multisource feedback (MSF) tool for educational supervisors in the London Deanery, an organisation responsible for 20% of the UK's doctors and dentists in training. A narrative review of the literature generated a question framework for a series of focus groups. Data were analysed using an interpretative thematic approach and the resulting instrument was piloted online. Instrument performance was analysed using a variety of tools including factor analysis, generalisability theory and analysis of performance in the first year of implementation. Two factors were initially identified. Three questions performed inadequately and were subsequently discarded. Educational supervisors scored well, generally rating themselves lower than their trainees rated them. The instrument was launched in July 2010, requiring five respondents to generate a summated report, with further validity evidence collated over the first year of implementation. Arising out of a robust development process, the London Deanery MSF instrument for educational supervisors demonstrates considerable evidence of validity and can provide supervisors with useful evidence of their effectiveness.
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
Trends of communication skills education in medical schools.
Han, Hong Hee; Kim, Sun
2009-03-01
To investigate the past and current status of teaching communication skills in undergraduate medical education and to review how medical education is progressing. A selective search was conducted of the literature published from 1960 to June 2008 in the MEDLINE, EMBASE, ERIC, PsycInfo, and KMbase databases using "communication." All articles in 13 medical journals (including Academic Medicine, Medical Education, Teaching and Learning in Medicine, Medical Teacher, and the Korean Journal of Medical Education) were reviewed. Each article was categorized according to 5 subjects (curriculum, methods, assessment, student factors, and research type). A total of 306 studies met the inclusion criteria for this study. Curriculum was the most frequent subject (n=85), followed by assessment (n=71), student factors (n=48), and methods (n=23). According to this research, the current trends in teaching communication skills in medical school are characterized by 'curriculum development,' 'blended methods,' 'multisource assessment,' 'student attitudes,' and 'comparative studies' of education. It is time to identify effective ways to design a formal course. Four current trends in teaching and learning are now emerging in communication skills: curriculum development is stabilizing; a variety of teaching methods are being adopted; methods of multisource assessment are being identified; and the need to consider student attitudes is being recognized. In the near future, objective, comprehensive, and sophisticated evaluation is going to be the top priority in teaching communication skills, with a variety of research types.
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
NASA Astrophysics Data System (ADS)
Guarnieri, A.; Masiero, A.; Piragnolo, M.; Pirotti, F.; Vettore, A.
2016-06-01
In this paper we present the results of the development of a Web-based archiving and documentation system aimed at the management of multisource and multitemporal data related to cultural heritage. As a case study we selected the building complex of Villa Revedin Bolasco in Castelfranco Veneto (Treviso, Italy) and its park. The buildings and park were built in the 19th century after several restorations of the original 14th-century area. The data management system relies on a geodatabase framework, in which different kinds of datasets were stored. More specifically, the geodatabase elements consist of historical information, documents, and descriptions of the artistic characteristics of the building and the park, in the form of text and images. In addition, we also used floorplans, sections and views of the outer facades of the building extracted from a TLS-based 3D model of the whole Villa. In order to manage and explore this rich dataset, we developed a geodatabase using PostgreSQL with PostGIS as the spatial plugin. The Web-GIS platform, based on the HTML5 and PHP programming languages, implements NASA Web World Wind, a 3D virtual globe we used to enable the navigation and interactive exploration of the park. Furthermore, through a specific timeline function, the user can explore the historical evolution of the building complex.
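As an illustration of the kind of spatially enabled table such a geodatabase can hold, here is a minimal Python/PostGIS sketch; the connection string, table name, columns and sample record are hypothetical, and PostGIS is assumed to be available on the target database:

import psycopg2

# Hypothetical connection string; the real geodatabase credentials differ.
conn = psycopg2.connect("dbname=villa_revedin user=gis password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS heritage_item (
        id          serial PRIMARY KEY,
        name        text,
        period      text,                   -- e.g. '14th century', '19th century'
        description text,
        geom        geometry(Point, 4326)   -- location within the park or villa
    );
""")
cur.execute(
    "INSERT INTO heritage_item (name, period, description, geom) "
    "VALUES (%s, %s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326));",
    ("Park greenhouse", "19th century", "Illustrative record", 11.93, 45.67),
)
conn.commit()
cur.close()
conn.close()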
Somatic Consequences and Symptomatic Responses to Stress: Directions for Future Research
1999-07-01
...endeavors, some early work in developing multi-method, multi-source assessment approaches for identifying cases of PTSD; some clinical studies ... research dealing with the entire concept of the cultural shaping of what he calls the illness narrative and the way in which this tends to control the ... talk for five to ten minutes about the pattern of the research you've been doing and the directions it's been going in and the directions you think it
Pattern recognition methods and air pollution source identification. [based on wind direction
NASA Technical Reports Server (NTRS)
Leibecki, H. F.; King, R. B.
1978-01-01
Directional air samplers, used for resolving suspended particulate matter on the basis of time and wind direction, were used to assess the feasibility of characterizing and identifying emission source types in urban multisource environments. Filters were evaluated for 16 elements, and X-ray fluorescence methods yielded elemental concentrations by direction, day, and the interaction of direction and day. Large numbers of samples are necessary to compensate for large day-to-day variations caused by wind perturbations and/or source changes.
NASA Astrophysics Data System (ADS)
Beck, H.; Vergopolan, N.; Pan, M.; Levizzani, V.; van Dijk, A.; Weedon, G. P.; Brocca, L.; Huffman, G. J.; Wood, E. F.; William, L.
2017-12-01
We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000-2016. Twelve non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76,086 gauges worldwide. Another ten gauge-corrected ones were evaluated using hydrological modeling, by calibrating the conceptual model HBV against streamflow records for each of 9053 small to medium-sized (<50,000 km2) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR), the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed those indirectly incorporating gauge data through other multi-source datasets (PERSIANN-CDR V1R1 and PGF). Our results highlight large differences in estimation accuracy, and hence, the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite- and reanalysis-based P estimates.
van Dijke, Marius; De Cremer, David; Langendijk, Gerben; Anderson, Cameron
2018-02-01
Research shows that power can lead to prosocial behavior by facilitating the behavioral expression of dispositional prosocial motivation. However, it is not clear how power may facilitate responses to contextual factors that promote prosocial motivation. Integrating fairness heuristic theory and the situated focus theory of power, we argue that in particular, organization members in lower (vs. higher) hierarchical positions who simultaneously experience a high (vs. low) sense of power respond with prosocial behavior to 1 important antecedent of prosocial motivation, that is, the enactment of procedural justice. The results from a multisource survey among employees and their leaders from various organizations (Study 1) and an experiment using a public goods dilemma (Study 2) support this prediction. Three subsequent experiments (Studies 3-5) show that this effect is mediated by perceptions of authority trustworthiness. Taken together, this research (a) helps resolve the debate regarding whether power promotes or undermines prosocial behavior, (b) demonstrates that hierarchical position and the sense of power can have very different effects on processes that are vital to the functioning of an organization, and (c) helps solve ambiguity regarding the roles of hierarchical position and power in fairness heuristic theory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Luo, Jieqiong; Zhou, Tinggang; Du, Peijun; Xu, Zhigang
2018-01-01
With rapid environmental degradation and socio-economic development, the human settlement environment (HSE) has experienced dramatic changes and attracted attention from different communities. Consequently, the spatial-temporal evaluation of the natural suitability of the human settlement environment (NSHSE) has become essential for understanding the patterns and dynamics of the HSE, and for coordinating sustainable development among regional populations, resources, and environments. This study aims to explore the spatial-temporal evolution of NSHSE patterns in 1997, 2005, and 2009 in Fengjie County near the Three Gorges Reservoir Area (TGRA). A spatially weighted NSHSE model was established by integrating multi-source data (e.g., census data, meteorological data, remote sensing images, DEM data, and GIS data) into one framework, where the Ordinary Least Squares (OLS) linear regression model was applied to calculate the weights of the indices in the NSHSE model. Results show that the trend of natural suitability has been first downward and then upward, as evidenced by the disparity of NSHSE among the southern, northern, and central areas of Fengjie County. Results also reveal clustered NSHSE patterns for all 30 townships. Meanwhile, NSHSE has a significant influence on population distribution, with 71.49% of the total population living in moderately and highly suitable districts.
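The weighting step can be sketched as follows, assuming, purely for illustration, that population density is the response variable against which the suitability indices are regressed; the arrays and the choice of response are placeholders, not the Fengjie County data:

import numpy as np

rng = np.random.default_rng(3)
n = 200
# Hypothetical per-township suitability indices (terrain, climate, hydrology, land cover).
X = rng.uniform(0, 1, size=(n, 4))
pop_density = X @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(0, 0.05, n)  # assumed response

# Ordinary least squares fit; normalized coefficients serve as index weights.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, pop_density, rcond=None)
weights = beta[1:] / beta[1:].sum()
print("index weights:", weights.round(2))

# Weighted NSHSE score for each township.
nshse = X @ weights
print("first five scores:", nshse[:5].round(2))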
Doctor coach: a deliberate practice approach to teaching and learning clinical skills.
Gifford, Kimberly A; Fall, Leslie H
2014-02-01
The rapidly evolving medical education landscape requires restructuring the approach to teaching and learning across the continuum of medical education. The deliberate practice strategies used to coach learners in disciplines beyond medicine can also be used to train medical learners. However, these deliberate practice strategies are not explicitly taught in most medical schools or residencies. The authors designed the Doctor Coach framework and competencies in 2007-2008 to serve as the foundation for new faculty development and resident-as-teacher programs. In addition to teaching deliberate practice strategies, the programs model a deliberate practice approach that promotes the continuous integration of newly developed coaching competencies by participants into their daily teaching practice. Early evaluation demonstrated the feasibility and efficacy of implementing the Doctor Coach framework across the continuum of medical education. Additionally, the Doctor Coach framework has been disseminated through national workshops, which have resulted in additional institutions applying the framework and competencies to develop their own coaching programs. Design of a multisource evaluation tool based on the coaching competencies will enable more rigorous study of the Doctor Coach framework and training programs and provide a richer feedback mechanism for participants. The framework will also facilitate the faculty development needed to implement the milestones and entrustable professional activities in medical education.
Tracking Vessels to Illegal Pollutant Discharges Using Multisource Vessel Information
NASA Astrophysics Data System (ADS)
Busler, J.; Wehn, H.; Woodhouse, L.
2015-04-01
Illegal discharge of bilge waters is a significant source of oil and other environmental pollutants in Canadian and international waters. Imaging satellites are commonly used to monitor large areas to detect oily discharges from vessels, off-shore platforms and other sources. While remotely sensed imagery provides a snap-shot picture useful for detecting a spill or the presence of vessels in the vicinity, it is difficult to directly associate a vessel to an observed spill unless the vessel is observed while the discharge is occurring. The situation then becomes more challenging with increased vessel traffic as multiple vessels may be associated with a spill event. By combining multiple sources of vessel location data, such as Automated Information Systems (AIS), Long Range Identification and Tracking (LRIT) and SAR-based ship detection, with spill detections and drift models we have created a system that associates detected spill events with vessels in the area using a probabilistic model that intersects vessel tracks and spill drift trajectories in both time and space. Working with the Canadian Space Agency and the Canadian Ice Service's Integrated Satellite Tracking of Pollution (ISTOP) program, we use spills observed in Canadian waters to demonstrate the investigative value of augmenting spill detections with temporally sequenced vessel and spill tracking information.
Mao, Nini; Liu, Yunting; Chen, Kewei; Yao, Li; Wu, Xia
2018-06-05
Multiple neuroimaging modalities have been developed providing various aspects of information on the human brain. Used together and properly, these complementary multimodal neuroimaging data integrate multisource information which can facilitate a diagnosis and improve the diagnostic accuracy. In this study, 3 types of brain imaging data (sMRI, FDG-PET, and florbetapir-PET) were fused in the hope of improving diagnostic accuracy, and multivariate methods (logistic regression) were applied to these trimodal neuroimaging indices. Then, the receiver-operating characteristic (ROC) method was used to analyze the outcomes of the logistic classifier, with either each index, multiple indices from each modality, or all indices from all 3 modalities, to investigate their differential abilities to identify the disease. With increasing numbers of indices within each modality and across modalities, the accuracy of identifying Alzheimer disease (AD) increases to varying degrees. For example, the area under the ROC curve is above 0.98 when all the indices from the 3 imaging data types are combined. Using a combination of different indices, the results confirmed the initial hypothesis that different biomarkers were potentially complementary, and thus the conjoint analysis of information from multiple sources would improve the capability to identify diseases such as AD and mild cognitive impairment. © 2018 S. Karger AG, Basel.
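A minimal sketch of the classify-and-compare-ROC step described above, using scikit-learn on simulated data; the group sizes, effect sizes, and index names are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
# Simulated indices standing in for three modalities (sMRI volume,
# FDG-PET uptake, florbetapir-PET burden); cases shifted vs. controls.
y = rng.integers(0, 2, n)                       # 1 = AD, 0 = control
X = rng.normal(0, 1, (n, 3)) + np.outer(y, [1.0, 0.8, 0.6])

auc_single = []
for j in range(3):                              # one index at a time
    clf = LogisticRegression().fit(X[:, [j]], y)
    auc_single.append(roc_auc_score(y, clf.predict_proba(X[:, [j]])[:, 1]))

clf_all = LogisticRegression().fit(X, y)        # all indices combined
auc_all = roc_auc_score(y, clf_all.predict_proba(X)[:, 1])
print(auc_single, auc_all)                      # combined AUC is typically highest
```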
NASA Astrophysics Data System (ADS)
Tonini, R.; Anita, G.
2011-12-01
In both worldwide and regional historical catalogues, most of the tsunamis are caused by earthquakes and a minor percentage is represented by all the other non-seismic sources. On the other hand, tsunami hazard and risk studies are often applied to very specific areas, where this global trend can be different or even inverted, depending on the kind of potential tsunamigenic sources which characterize the case study. So far, few probabilistic approaches consider the contribution of landslides and/or phenomena derived from volcanic activity, i.e. pyroclastic flows and flank collapses, as predominant in probabilistic tsunami hazard assessment (PTHA), also because of the difficulties in estimating the corresponding recurrence times. These considerations are valid, for example, for the city of Naples, Italy, which is surrounded by a complex active volcanic system (Vesuvio, Campi Flegrei, Ischia) that presents a significant number of potential tsunami sources of non-seismic origin compared to the seismic ones. In this work we present the preliminary results of a probabilistic multi-source tsunami hazard assessment applied to Naples. The method to estimate the uncertainties will be based on Bayesian inference. This is the first step towards a more comprehensive task which will provide a tsunami risk quantification for this town in the frame of the Italian national project ByMuR (http://bymur.bo.ingv.it). This ongoing three-year project has the final objective of developing a Bayesian multi-risk methodology to quantify the risk related to different natural hazards (volcanoes, earthquakes and tsunamis) applied to the city of Naples.
NASA Astrophysics Data System (ADS)
Zhang, S.; Li, H.
2017-12-01
The changes of glacier area, ice surface elevation and ice storage in the upper reaches of the Shule River Basin were investigated using Landsat TM series imagery, SRTM and stereo image pairs of the Third Resources Satellite (ZY-3) from 2000 to 2015. There are 510 glaciers with areas larger than 0.01 km2 in 2015, and the glacier area is 435 km2 in the upper reach of the Shule River basin. 96 glaciers disappeared from 2000 to 2015, and the total glacier area decreased by 57.6 ± 2.68 km2 (11.7%). After correcting the elevation difference between the ZY-3 DEM and SRTM and accounting for aspect, we found that the average ice surface elevation of the glaciers was reduced by 2.58 ± 0.6 m from 2000 to 2015, with an average reduction of 0.172 ± 0.04 m a-1, and the ice storage was reduced by 1.277 ± 0.311 km3. Elevation variation of the ice surface in different sub-regions reflects the complexity of glacier change. The ice storage change calculated from the sum of single-glacier area-volume relationships is 1.46 times higher than that estimated from ice surface elevation change, indicating that the global ice storage change estimated from glacier area-volume change is probably overestimated. The shrinkage of glaciers increased glacier runoff and led to a significant increase of river runoff. The accuracy of projecting the potential glacier change, glacier runoff and river runoff is a key issue for refined water resource management in the Shule River Basin.
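A back-of-the-envelope sketch of the two ice-storage estimates being compared above: a geodetic estimate from mean surface lowering, and a volume-area scaling estimate summed glacier by glacier. The scaling coefficients and the per-glacier areas are illustrative placeholders, not the study's values.

```python
# Geodetic estimate: mean surface lowering times glacierized area.
area_km2 = 435.0                              # total glacier area (from the abstract)
dh_m = -2.58                                  # mean surface elevation change, 2000-2015
dV_geodetic_km3 = area_km2 * dh_m / 1000.0    # km^2 * m -> km^3

# Volume-area scaling V = c * A**gamma applied glacier by glacier
# (commonly cited illustrative coefficients c = 0.0365, gamma = 1.375).
def volume_km3(area_km2):
    return 0.0365 * area_km2 ** 1.375

# Hypothetical per-glacier areas in 2000 and 2015, for illustration only.
areas_2000 = [2.0, 1.5, 0.8, 0.4]
areas_2015 = [1.8, 1.3, 0.7, 0.3]
dV_scaling_km3 = sum(volume_km3(a15) - volume_km3(a00)
                     for a00, a15 in zip(areas_2000, areas_2015))

print(dV_geodetic_km3, dV_scaling_km3)        # the two estimates generally differ
```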
NASA Astrophysics Data System (ADS)
Zoro, Emma-Georgina
The objective of this project is to carry out a comparative analysis of two urban environments with remote sensing and Geographic Information Systems, integrating multi-source data. The city of Abidjan (Cote d'Ivoire) and Montreal Island (Quebec) were selected. This study lies within the context of the strong demographic and spatial growth of urban environments. A supervised classification based on the theory of evidence allowed the identification of mixed pixels. However, the accuracy of this method is lower than that of the Bayesian theory. Nevertheless, this method showed that the most credible classes (maximum beliefs in a "closed world") are the most probable (maximum probabilities) and thus confirms the Bayesian maximum-likelihood decision. On the other hand, the contrary is not necessarily true because of the rules of combination. The urban cover map resulting from classification by the maximum likelihood method was then used to determine a relation between the residential surface and the number of inhabitants in a sector. Moreover, the area of green spaces was an input (environmental component) for the Urban Development Indicator (IDU), the model elaborated for quantifying the quality of life in urban environments; this indicator was defined to allow a complete and efficient comparison of urban environments. Following a thorough bibliographical review, seven criteria were retained to describe the optimal conditions for the population's well-being. These criteria were then estimated from standardized indices. The choice of these criteria is a function of the availability of the data to be integrated into the GIS. As the selected criteria do not have the same importance in the definition of the quality of urban life, they needed to be ranked by the multicriteria hierarchy method and normalized in order to join them together in a single parameter. The composite indicator IDU thus obtained allowed us to establish that Abidjan had an average level of development in 1995, while Montreal Island had strong urban development. Moreover, the comparison of the IDUs reveals needs for health and educational facilities in Abidjan. In addition, from 1989 to 1995 Abidjan developed, while Montreal Island showed a slight decrease in IDU between 1991 and 1996. These assertions are confirmed by the studies carried out on these urban communities and validate the relevance of the IDU for quantifying and comparing urban development. Such work can be used by decision makers to establish urban policies for sustainable development.
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant
2012-01-01
QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually, and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system, and are accessible by users or various model applications. UAVSAR repeat pass interferometry data products are added to the QuakeTables database, and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables, or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models. Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes, and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.
Combining path integration and remembered landmarks when navigating without vision.
Kalia, Amy A; Schrater, Paul R; Legge, Gordon E
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
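A minimal sketch of reliability-weighted cue averaging with a congruency gate, of the kind implied by the findings above: cues are averaged with weights proportional to their inverse variances when congruent, and the landmark cue is discarded when the conflict is large. The cue variances and the gating threshold are illustrative assumptions.

```python
def combine_location(landmark, path_integration,
                     sigma_lm=0.3, sigma_pi=0.6, gate=1.5):
    """Combine two location estimates (in metres along the hallway).

    If the cues disagree by more than `gate` metres they are treated as
    incongruent and only path integration is used; otherwise they are
    averaged with weights proportional to their inverse variances.
    """
    if abs(landmark - path_integration) > gate:
        return path_integration               # gated: discard the landmark cue
    w_lm = 1.0 / sigma_lm ** 2
    w_pi = 1.0 / sigma_pi ** 2
    return (w_lm * landmark + w_pi * path_integration) / (w_lm + w_pi)

print(combine_location(4.0, 4.4))   # congruent cues -> precision-weighted average
print(combine_location(4.0, 7.0))   # conflicting cues -> path integration only
```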
Pre-2014 mudslides at Oso revealed by InSAR and multi-source DEM analysis
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lu, Z.; QU, F.
2014-12-01
The landslide is a process that results in the downward and outward movement of slope-reshaping materials, including rocks and soils, and annually causes approximately $3.5 billion in losses and tens of casualties in the United States. The 2014 Oso mudslide was an extreme event that claimed nearly 40 lives and damaged civilian properties. Landslides are often unpredictable, but in many cases catastrophic events are repetitive. Historical records at the Oso mudslide site indicate that there have been serial events over the decades, though the extent of the sliding events varied from time to time. In our study, the combination of multi-source DEMs, InSAR, and time-series InSAR analysis has enabled us to characterize the Oso mudslide. InSAR results from ALOS PALSAR show that there was no significant deformation between mid-2006 and 2011. The combination of time-series InSAR analysis and an older DEM revealed topographic changes associated with the 2006 sliding event, which is confirmed by the difference of multiple LiDAR DEMs. Precipitation and discharge measurements before the 2006 and 2014 landslide events did not exhibit extremely anomalous records, suggesting that precipitation is not the controlling factor in determining the sliding events at Oso. The lack of surface deformation during 2006-2011 and the weak correlation between precipitation and the sliding events suggest that other factors (such as porosity) might play a critical role in the run-away events at Oso and other similar landslides.
Lakshminarayana, Indumathy; Wall, David; Bindal, Taruna; Goodyear, Helen M
2015-05-01
Leading a ward round is an essential skill for hospital consultants and senior trainees but is rarely assessed during training. To investigate the key attributes for ward round leadership and to use these results to develop a multisource feedback (MSF) tool to assess the ward round leadership skills of senior specialist trainees. A panel of experts comprising four senior paediatric consultants and two nurse managers were interviewed from May to August 2009. From analysis of the interview transcripts, 10 key themes emerged. A structured questionnaire based on the key themes was designed and sent electronically to paediatric consultants, nurses and trainees at a large university hospital (June-October 2010). 81 consultants, nurses and trainees responded to the survey. The internal consistency of this tool was high (Cronbach's α 0.95). Factor analysis showed that five factors accounted for 72% of variance. The five key areas for ward round leadership were communication skills, preparation and organisation, teaching and enthusiasm, team working and punctuality; communication was the most important key theme. A MSF tool for ward round leadership skills was developed with these areas as five domains. We believe that this tool will add to the current assessment tools available by providing feedback about ward round leadership skills. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Shepherd, Annabel; Lough, Murray
2010-05-01
Although multi-source feedback (MSF) has been used in primary healthcare, the development of an MSF instrument specific to this setting in the UK has not been previously described. The aims of this study were to develop and evaluate an MSF instrument for GPs in Scotland taking part in appraisal. The members of ten primary healthcare teams in the west of Scotland were asked to provide comments in answer to the question, 'What is a good GP?'. The data were reduced and coded by two researchers and questions were devised. Following content validity testing the MSF process was evaluated with volunteers using face-to-face interviews and a postal survey. Thirty-seven statements covering the six domains of communication skills, professional values, clinical care, working with colleagues, personality issues and duties and responsibilities were accepted as relevant by ten primary healthcare teams using a standard of 80 percent agreement. The evaluation found the MSF process to be feasible and acceptable and participants provided some evidence of educational impact. An MSF instrument for GPs has been developed based on the concept of 'the good GP' as described by the primary healthcare team. The evaluation of the resultant MSF process illustrates the potential of MSF, when delivered in the supportive environment of GP appraisal, to provide feedback which has the possibility of improving working relationships between GPs and their colleagues.
NASA Astrophysics Data System (ADS)
Chang, N. B.; Yang, Y. J.; Daranpob, A.
2009-09-01
Recent extreme hydroclimatic events in the United States alone include, but are not limited to, the droughts in Maryland and the Chesapeake Bay area in 2001 through September 2002; Lake Mead in Las Vegas in 2000 through 2004; the Peace River and Lake Okeechobee in South Florida in 2006; and Lake Lanier in Atlanta, Georgia in 2007 that affected the water resources distribution in three states - Alabama, Florida and Georgia. This paper provides evidence from previous work and elaborates on the future perspectives that will collectively employ remote sensing and in-situ observations to support the implementation of water availability assessment in a metropolitan region. Within the hydrological cycle, precipitation, soil moisture, and evapotranspiration can be monitored by using WSR-88D/NEXRAD data, RADARSAT-1 images, and GEOS images collectively to address the spatiotemporal variations of the quantitative availability of waters, whereas the MODIS images may be used to track the qualitative availability of waters in terms of turbidity, Chlorophyll-a and other constituents of concern. Tampa Bay in Florida was selected as the study site in this analysis, where the water supply infrastructure covers groundwater, a desalination plant, and surface water at the same time. Research findings show that through the proper fusion of multi-source and multi-scale remote sensing data for water availability assessment in a metropolitan region, new insight into water infrastructure assessment can be gained to support sustainable planning region wide.
NASA Astrophysics Data System (ADS)
Elder, C.; Xu, X.; Walker, J. C.; Walter Anthony, K. M.; Pohlman, J.; Arp, C. D.; Townsend-Small, A.; Hinkel, K. M.; Czimczik, C. I.
2017-12-01
Lakes in Arctic and Boreal regions are hotspots for atmospheric exchange of the greenhouse gases CO2 and CH4. Thermokarst lakes are a subset of these Northern lakes that may further accelerate climate warming by mobilizing ancient permafrost C (> 11,500 years old) that has been disconnected from the active C cycle for millennia. Northern lakes are thus potentially powerful agents of the permafrost C-climate feedback. While they are critical for projecting the magnitude and timing of these feedbacks from the rapidly warming circumpolar region, we lack datasets capturing the diversity of northern lakes, especially regarding their CH4 contributions to whole-lake C emissions and their ability to access and mobilize ancient C. We measured the radiocarbon (14C) ages of CH4 and CO2 emitted from 60 understudied lakes and ponds in Arctic and Boreal Alaska during winter and summer to estimate the ages of the C sources yielding these gases. Integrated mean ages for whole-lake emissions were inferred from the 14C-age of dissolved gases sampled beneath seasonal ice. Additionally, we measured concentrations and 14C values of gases emitted by ebullition and diffusion in summer to apportion C emission pathways. Using a multi-sourced mass balance approach, we found that whole-lake CH4 and CO2 emissions were predominantly sourced from relatively young C in most lakes. In Arctic lakes, CH4 originated from 850 14C-year old C on average, whereas dissolved CO2 was sourced from 400 14C-year old C, and represented 99% of total dissolved C flux. Although ancient C had a minimal influence (11% of total emissions), we discovered that lakes in finer-textured aeolian deposits (Yedoma) emitted twice as much ancient C as lakes in sandy regions. In Boreal, Yedoma-type lakes, CH4 and CO2 were fueled by significantly older sources, and mass balance results estimated CH4 ebullition to comprise 50-60% of whole-lake CH4 emissions. The mean 14C-age of Boreal emissions was 6,000 14C-years for CH4-C, and 2,400 14C-years for CO2-C. Seasonal differences in dissolved CH4 revealed a clear influence of trapped ebullition dissolving into the water below lake ice in Boreal, but not Arctic, lakes. Together, our data demonstrate that regional surficial geology exerts a larger control than climate on C ages and gas emission pathways from lakes.
Three-Axis Attitude Estimation Using Rate-Integrating Gyroscopes
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis
2016-01-01
Traditionally, attitude estimation has been performed using a combination of external attitude sensors and internal three-axis gyroscopes. There are many studies of three-axis attitude estimation using gyros that read angular rates. Rate-integrating gyros measure integrated rates or angular displacements, but three-axis attitude estimation using these types of gyros has not been as fully investigated. This paper derives a Kalman filtering framework for attitude estimation using attitude sensors coupled with rate-integrating gyroscopes. In order to account for correlations introduced by using these gyros, the state vector must be augmented, compared with filters using traditional gyros that read angular rates. Two filters are derived in this paper. The first uses an augmented state-vector form that estimates attitude, gyro biases, and gyro angular displacements. The second ignores correlations, leading to a filter that estimates attitude and gyro biases only. Simulation comparisons are shown for both filters. The work presented in this paper focuses only on attitude estimation using rate-integrating gyros, but it can easily be extended to other applications such as inertial navigation, which estimates attitude and position.
NASA Astrophysics Data System (ADS)
Barberá, J. A.; Andreo, B.
2017-04-01
In upland catchments, the hydrology and hydrochemistry of streams are largely influenced by groundwater inflows, at both regional and local scales. However, reverse conditions (groundwater dynamics conditioned by surface water interferences), although less described, may also occur. In this research, the local river-spring connectivity and induced hydrogeochemical interactions in intensely folded, fractured and layered Cretaceous marls and marly-limestones (Fuensanta river valley, S Spain) are discussed based on field observations, tracer tests and hydrodynamic and hydrochemical data. The differential flow measurements and tracing experiments performed in the Fuensanta river permitted us to quantify the surface water losses and to verify the river's direct hydraulic connection with the Fuensanta spring. The numerical simulations of tracer breakthrough curves suggest the existence of a groundwater flow system through well-connected master and tributary fractures, with fast and multi-source flow components. Furthermore, the multivariate statistical analysis conducted using chemical data from the sampled waters, the geochemical study of water-rock interactions and the proposed water mixing approach allowed the spatial characterization of the chemistry of the springs and river/stream waters draining low-permeability Cretaceous formations. Results corroborated that the mixing of surface waters, as well as calcite dissolution and CO2 dissolution/exsolution, are the main geochemical processes constraining Fuensanta spring hydrochemistry. The estimated contribution of the tributary surface waters to the spring flow during the research period was approximately 26-53% (Fuensanta river) and 47-74% (Convento stream), with the first component predominant during high flow and the second during the dry season. The identification of secondary geochemical processes (dolomite and gypsum dissolution and dedolomitization) in Fuensanta spring waters evidences the induced hydrogeochemical changes resulting from the allogenic recharge. This research highlights the usefulness of an integrated approach based on river and spring flow examination, dye tracing interpretation and regression and multivariate statistical analysis using hydrochemical data for surface water-groundwater interaction assessment in fractured complex environments worldwide, whose implementation becomes critical for an appropriate groundwater policy.
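A minimal sketch of the two-component mixing calculation that underlies contribution estimates like the 26-53% / 47-74% ranges reported above, assuming a conservative tracer; the concentrations are placeholders, not the study's data.

```python
def mixing_fraction(c_mix, c_a, c_b):
    """Fraction of end-member A in a two-component mixture.

    Solves c_mix = f * c_a + (1 - f) * c_b for f, assuming a conservative
    tracer (e.g. chloride or electrical conductivity).
    """
    return (c_mix - c_b) / (c_a - c_b)

# Placeholder tracer concentrations (mg/L): spring water as the mixture,
# river and stream inflows as the two end-members.
c_spring, c_river, c_stream = 42.0, 30.0, 55.0
f_river = mixing_fraction(c_spring, c_river, c_stream)
print(f_river, 1 - f_river)   # fractions of river and stream water in the spring
```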
Sudhakar Reddy, C; Vazeed Pasha, S; Jha, C S; Dadhwal, V K
2015-07-01
Conservation of biodiversity has been given the highest priority throughout the world. Identifying threatened ecosystems requires examining the different drivers of biodiversity loss. The present study aimed to generate spatial information on deforestation and on the ecological degradation indicators of fragmentation and forest fires using a systematic conceptual approach in Telangana state, India. Identification of ecosystems facing increasing vulnerability can help to safeguard species against extinction and is useful for conservation planning. Advances in satellite remote sensing and Geographical Information Systems have greatly improved the assessment and monitoring of ecosystem-level changes. The areas of threat were identified by creating grid cells (5 × 5 km) in a Geographical Information System (GIS). Deforestation was assessed using multi-source data of 1930, 1960, 1975, 1985, 1995, 2005 and 2013. The forest cover was estimated as 40,746 km2, 29,299 km2, 18,652 km2, 18,368 km2, 18,006 km2, 17,556 km2 and 17,520 km2 in 1930, 1960, 1975, 1985, 1995, 2005 and 2013, respectively. Historical evaluation of deforestation revealed that major changes had occurred in the forests of Telangana and identified 1095 extinct, 397 critically endangered, 523 endangered and 311 vulnerable ecosystem grid cells. The fragmentation analysis identified 307 ecosystem grid cells under critically endangered status. Forest burnt area information was extracted using AWiFS data from 2005 to 2014. Spatial analysis indicates that 58.9% of the forest in Telangana was affected by fire over this decadal period. Conservation status was assigned according to the threat values for each grid, which forms the basis for conservation priority hotspots. Of the existing forest, 2.1% of grids showed severe ecosystem collapse and were included under the category of conservation priority hotspot-I, followed by 27.2% in conservation priority hotspot-II and 51.5% in conservation priority hotspot-III. This analysis complements the assessment of ecosystems undergoing multiple threats. An integrated approach involving deforestation and degradation indicators is useful in formulating strategies for appropriate conservation measures.
NASA Astrophysics Data System (ADS)
Xue, D.; Yu, X.; Jia, S.; Chen, F.; Li, X.
2018-04-01
In this paper, sequential L-band ALOS PALSAR data and airborne SAR data acquired from June 5, 2008 to September 8, 2015 are used. Building on SAR data preprocessing and core algorithms such as geocoding, registration, filtering, unwrapping and baseline estimation, the improved Goldstein filtering algorithm and the branch-cut path tracking algorithm are used to unwrap the phase. The DEM and surface deformation information of the experimental area were extracted. By combining SAR imaging geometry and differential interferometry with a composite analysis of multi-source images, a method for detecting landslide disasters that exploits SAR image coherence is developed, which compensates for the limited acquisition capability of single SAR or optical remote sensing, especially in areas with bad weather and abnormal climate, and improves the speed of disaster emergency response and the accuracy of extraction. It is found that the deformation in this area is greatly affected by faults: there is a tendency toward uplift in the southeast plain and the western mountainous area, while the southwest part of the mountain area tends to subside. These results provide a basis for decision-making in local disaster prevention and control.
NASA Astrophysics Data System (ADS)
Tan, Chenyan; Fang, Weihua; Li, Jian
2016-04-01
In 2005, Typhoon Damrey (200518) caused severe damage to the rubber trees in Hainan Island with its destructive winds and rainfall. Selection of proper vegetation indices using multi-source remote sensing data is critical to the assessment of forest disturbance and damage loss for this event. In this study, we compare the performance of seven vegetation indices derived from MODIS and Landsat TM imagery acquired prior to and after Typhoon Damrey, in order to select an optimal index for identifying rubber tree disturbance. The indices to be compared are the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), normalized difference infrared index (NDII), enhanced vegetation index (EVI), leaf area index (LAI), forest z-score (IFZ), and disturbance index (DI). Ground truth data of rubber tree damage collected through field investigation were used to verify and compare the results. Our preliminary result for the area with ground-truth data shows that DI performs best for disturbance detection for this typhoon event. The DI is then applied to all the areas in Hainan Island hit by Damrey to evaluate the overall forest damage severity. Finally, rubber tree damage severity is analyzed with other typhoon hazard factors such as wind, topography, soil and precipitation.
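A minimal sketch of computing the simpler spectral indices compared above from surface-reflectance bands and flagging disturbance from their pre/post change; the band arrays, the NDWI variant, and the change threshold are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    # Gao's NDWI variant using a shortwave-infrared band (an assumption here).
    return (nir - swir) / (nir + swir)

# Illustrative pre- and post-typhoon reflectance arrays (0-1 scale).
rng = np.random.default_rng(2)
pre = {b: rng.uniform(0.05, 0.5, (100, 100)) for b in ("red", "nir", "swir")}
post = {b: v * rng.uniform(0.7, 1.0, v.shape) for b, v in pre.items()}

d_ndvi = ndvi(post["nir"], post["red"]) - ndvi(pre["nir"], pre["red"])
disturbed = d_ndvi < -0.1             # illustrative disturbance threshold
print(disturbed.mean())               # fraction of pixels flagged as disturbed
```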
The development of performance-monitoring function in the posterior medial frontal cortex
Fitzgerald, Kate Dimond; Perkins, Suzanne C.; Angstadt, Mike; Johnson, Timothy; Stern, Emily R.; Welsh, Robert C.; Taylor, Stephan F.
2009-01-01
Background Despite its critical role in performance-monitoring, the development of posterior medial prefrontal cortex (pMFC) in goal-directed behaviors remains poorly understood. Performance monitoring depends on distinct, but related functions that may differentially activate the pMFC, such as monitoring response conflict and detecting errors. Developmental differences in conflict- and error-related activations, coupled with age-related changes in behavioral performance, may confound attempts to map the maturation of pMFC functions. To characterize the development of pMFC-based performance monitoring functions, we segregated interference and error-processing, while statistically controlling for performance. Methods Twenty-one adults and 23 youth performed an event-related version of the Multi-Source Interference Task during functional magnetic resonance imaging (fMRI). Interference and error contrast estimates derived from the pMFC were regressed on age using linear models, while covarying for performance. Results Interference- and error-processing were associated with robust activation of the pMFC in both youth and adults. Among youth, interference- and error-related activation of the pMFC increased with age, independent of performance. Greater accuracy was associated with greater pMFC activity during error commission in both groups. Discussion Increasing pMFC response to interference and errors occurs with age, likely contributing to the improvement of performance monitoring capacity during development. PMID:19913101
Casetta, I; Pugliatti, M; Faggioli, R; Cesnik, E; Simioni, V; Bencivelli, D; De Carlo, L; Granieri, E
2012-02-01
The annual incidence of childhood and adolescence epilepsy ranges from 41 to 97 diagnoses per 100,000 people in western countries, with a reported decline over time. We aimed to study the incidence of epilepsy in children and adolescents (1 month to 14 years) and its temporal trend in the province of Ferrara, northern Italy. We implemented a community-based prospective multi-source registry. All children with newly diagnosed epilepsy in the period 1996-2005 were recorded. The incidence rate of newly diagnosed epilepsy in the considered age range was 57 per 100,000 person-years (95% CI: 49.3-65.9), with a peak in the first year of life (109.4; 95% CI: 69.4-164.1) and without differences between the two genders. The estimates were significantly lower than those observed previously (97.3; 95% CI: 81.9-115.7). Incidence rates for epilepsy in the Italian population aged 1 month to 14 years are in line with those of other European and North American countries. The incidence of childhood epilepsy has declined over time in our area. A reduced impact of serious perinatal adverse events could partly explain the decline. © 2011 The Author(s). European Journal of Neurology © 2011 EFNS.
Research on a dem Coregistration Method Based on the SAR Imaging Geometry
NASA Astrophysics Data System (ADS)
Niu, Y.; Zhao, C.; Zhang, J.; Wang, L.; Li, B.; Fan, L.
2018-04-01
Due to the systematic errors, especially the horizontal deviations, that exist in multi-source, multi-temporal DEMs (Digital Elevation Models), a method for high precision coregistration is needed. This paper presents a new fast DEM coregistration method based on a given SAR (Synthetic Aperture Radar) imaging geometry to overcome the divergence and time-consuming problems of conventional DEM coregistration methods. First, intensity images are simulated for the two DEMs under the given SAR imaging geometry. 2D (two-dimensional) offsets are estimated in the frequency domain using the intensity cross-correlation operation with the FFT (Fast Fourier Transform) tool, which can greatly accelerate the calculation process. Next, the transformation function between the two DEMs is obtained via robust least-squares fitting of a 2D polynomial. Accordingly, the two DEMs can be precisely coregistered. Last, two DEMs, i.e., one high-resolution LiDAR (Light Detection and Ranging) DEM and one low-resolution SRTM (Shuttle Radar Topography Mission) DEM, covering the Yangjiao landslide region of Chongqing, are taken as an example to test the new method. The results indicate that, in most cases, this new method can achieve not only a result as much as 80 times faster than the minimum elevation difference (Least Z-difference, LZD) DEM registration method, but also more accurate and more reliable results.
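A minimal sketch of estimating a 2D offset between two simulated intensity images with cross-correlation in the frequency domain (phase correlation); in a full workflow the images would be tiled and a 2D polynomial fitted to the per-tile offsets, which is not shown here.

```python
import numpy as np

def fft_offset(img_a, img_b):
    """Estimate the integer (row, col) shift of img_b relative to img_a
    via the normalized cross-power spectrum (phase correlation)."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross = np.conj(F_a) * F_b
    cross /= np.abs(cross) + 1e-12            # keep only the phase information
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape)
    # Interpret peaks beyond half the image size as negative (wrapped) offsets.
    peak[peak > shape / 2] -= shape[peak > shape / 2]
    return peak

rng = np.random.default_rng(3)
a = rng.random((256, 256))
b = np.roll(a, (7, -12), axis=(0, 1))         # simulated shifted intensity image
print(fft_offset(a, b))                       # approximately [7., -12.]
```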
Dynamic assessments of population exposure to urban greenspace using multi-source big data.
Song, Yimeng; Huang, Bo; Cai, Jixuan; Chen, Bin
2018-09-01
A growing body of evidence has proven that urban greenspace is beneficial to improve people's physical and mental health. However, knowledge of population exposure to urban greenspace across different spatiotemporal scales remains unclear. Moreover, the majority of existing environmental assessments are unable to quantify how residents enjoy their ambient greenspace during their daily life. To deal with this challenge, we proposed a dynamic method to assess urban greenspace exposure with the integration of mobile-phone locating-request (MPL) data and high-spatial-resolution remote sensing images. This method was further applied to 30 major cities in China by assessing cities' dynamic greenspace exposure levels based on residents' surrounding areas with different buffer scales (0.5 km, 1 km, and 1.5 km). Results showed that regarding residents' 0.5-km surrounding environment, Wenzhou and Hangzhou were found to have the greenest exposure experience, whereas Zhengzhou and Tangshan had the least. The obvious diurnal and daily variations of population exposure to surrounding greenspace were also identified to be highly correlated with the distribution pattern of urban greenspace and the dynamics of human mobility. Compared with two common measurements of urban greenspace (green coverage rate and green area per capita), the developed method integrated the dynamics of population distribution and geographic locations of urban greenspace into the exposure assessment, thereby presenting a more reasonable way to assess population exposure to urban greenspace. Additionally, this dynamic framework could be useful for supporting urban planning and environmental health studies and for advancing our understanding of the magnitude of population exposure to greenspace at different spatiotemporal scales. Copyright © 2018 Elsevier B.V. All rights reserved.
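A minimal sketch of the population-weighted exposure idea described above: each person's exposure is the greenspace fraction within a buffer around their location, averaged over a snapshot of the population. The grid resolution, square buffer approximation, and point locations are illustrative assumptions.

```python
import numpy as np

def buffered_green_fraction(green, row, col, radius_px):
    """Fraction of greenspace pixels within a square buffer (a stand-in for
    the circular 0.5/1/1.5 km buffers) centred on one person's location."""
    r0, r1 = max(row - radius_px, 0), min(row + radius_px + 1, green.shape[0])
    c0, c1 = max(col - radius_px, 0), min(col + radius_px + 1, green.shape[1])
    return green[r0:r1, c0:c1].mean()

rng = np.random.default_rng(4)
green = rng.random((200, 200)) > 0.7            # binary greenspace map (say, 30 m pixels)
people = rng.integers(0, 200, size=(1000, 2))   # one MPL-style location per person

exposure = np.mean([buffered_green_fraction(green, r, c, radius_px=17)  # ~0.5 km
                    for r, c in people])
print(exposure)   # city-level dynamic greenspace exposure for this snapshot
```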
Mer, Mervyn; Snyman, Jacques Rene; van Rensburg, Constance Elizabeth Jansen; van Tonder, Jacob John; Laurens, Ilze
2016-01-01
Clinicians' skepticism, fueled by evidence of inferiority of some multisource generic antimicrobial products, results in the underutilization of more cost-effective generics, especially in critically ill patients. The aim of this observational study was to demonstrate equivalence between the generic or comparator brand of meropenem (Mercide ® ) and the leading innovator brand (Meronem ® ) by means of an ex vivo technique whereby antimicrobial activity is used to estimate the plasma concentration of the active moiety. Patients from different high care and intensive care units were recruited for observation when prescribed either of the meropenem brands under investigation. Blood samples were collected over 6 hours after a 30 minute infusion of the different brands. Meropenem concentration curves were established against United States Pharmacopeia standard meropenem (Sigma-Aldrich) by using standard laboratory techniques for culture of Klebsiella pneumoniae. Patients' plasma samples were tested ex vivo, using a disc diffusion assay, to confirm antimicrobial activity and estimate plasma concentrations of the two brands. Both brands of meropenem demonstrated similar curves in donor plasma when concentrations in vials were confirmed. Patient-specific serum concentrations were determined from zones of inhibition against a standard laboratory Klebsiella strain ex vivo, confirming at least similar in vivo concentrations as the concentration curves (90% confidence interval) overlapped; however, the upper limit of the area under the curve for the ratio comparator/innovator exceeded the 1.25-point estimate, i.e., 4% higher for comparator meropenem. This observational, in-practice study demonstrates similar ex vivo activity and in vivo plasma concentration time curves for the products under observation. Assay sensitivity is also confirmed. The current registration status of generic small molecules remains in place. The products are therefore clinically interchangeable based on registration status as well as bioassay results, demonstrating sufficient overlap for clinical comfort. The slightly higher observed comparator meropenem concentration (4%) is still clinically acceptable due to the large therapeutic index and should allay fears of inferiority.
NASA Technical Reports Server (NTRS)
Streett, C. L.; Lockard, D. P.; Singer, B. A.; Khorrami, M. R.; Choudhari, M. M.
2003-01-01
The LaRC investigative process for airframe noise has proven to be a useful guide for elucidation of the physics of flow-induced noise generation over the last five years. This process, relying on a close interplay between experiment and computation, is described and demonstrated here on the archetypal problem of flap-edge noise. Some detailed results from both experiment and computation are shown to illustrate the process, and a description of the multi-source physics seen in this problem is conjectured.
Quadriceps oxygenation changes during walking and running on a treadmill
NASA Astrophysics Data System (ADS)
Quaresima, Valentina; Pizzi, Assunta; De Blasi, Roberto A.; Ferrari, Adriano; de Angelis, Marco; Ferrari, Marco
1995-04-01
Vastus lateralis muscle oxygenation was investigated in volunteers as well as muscular dystrophy patients during a walking test, and in volunteers during free running, using a continuous-wave near-infrared instrument. The data were analyzed using an oxygenation index independent of pathlength changes. Walking did not significantly affect the oxygenation of volunteers or patients. A relative deoxygenation was found only during free running, indicating an imbalance between oxygen supply and tissue oxygen extraction. Preliminary measurements of exercising muscle oxygen saturation were performed by a 110 MHz frequency-domain, multisource instrument.
Heterogeneous mixture distributions for multi-source extreme rainfall
NASA Astrophysics Data System (ADS)
Ouarda, T.; Shin, J.; Lee, T. S.
2013-12-01
Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture distribution to a combination of one identical type. It might be beneficial to characterize the statistical behavior of hydro-meteorological variables through the application of heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. In order to estimate the parameters of HTM distributions, a meta-heuristic algorithm (genetic algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) presents the best fit. The HTM distribution shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. It is shown that extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics. Results indicate that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
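A minimal sketch of fitting a heterogeneous Gamma-EV1 (Gumbel) mixture by maximizing the log-likelihood. SciPy's differential evolution is used here as a convenient stand-in for the genetic algorithm named above, and the sample, parameter bounds, and component parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)
# Illustrative annual-maximum rainfall sample drawn from a known mixture.
x = np.concatenate([stats.gamma.rvs(a=4, scale=20, size=120, random_state=rng),
                    stats.gumbel_r.rvs(loc=180, scale=35, size=80, random_state=rng)])

def neg_log_lik(theta):
    w, a, scale_g, loc_e, scale_e = theta
    pdf = (w * stats.gamma.pdf(x, a=a, scale=scale_g)
           + (1 - w) * stats.gumbel_r.pdf(x, loc=loc_e, scale=scale_e))
    return -np.sum(np.log(pdf + 1e-300))

bounds = [(0.01, 0.99), (0.5, 20), (1, 100), (50, 400), (1, 150)]
res = differential_evolution(neg_log_lik, bounds, seed=0)
print(res.x, -res.fun)    # fitted mixture parameters and maximized log-likelihood

# A 20-year quantile could then be obtained by solving F(q) = 1 - 1/20
# numerically on the fitted mixture CDF.
```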
NASA Astrophysics Data System (ADS)
Kostyuchenko, Yuriy V.; Yuschenko, Maxim; Movchan, Dmytro; Kopachevsky, Ivan
2017-10-01
The problem of harnessing remote sensing data for decision making in conflict territories is considered. An approach for the analysis of socio-economic and demographic parameters with a limited set of data and deep uncertainty is described. A number of interlinked techniques to estimate population and economy in crisis territories are proposed. A stochastic method for assessing population dynamics from multi-source remote sensing data is proposed, along with an adaptive Markov-chain-based method for studying land-use changes using satellite data. The proposed approach is applied to the analysis of the socio-economic situation in the Donbas (East Ukraine) conflict territory in 2014-2015. Land-use and land-cover patterns for different periods were analyzed using Landsat and MODIS data. The land-use classification scheme includes the following categories: (1) urban or built-up land, (2) barren land, (3) cropland, (4) horticulture farms, (5) livestock farms, (6) forest, and (7) water. It was demonstrated that no drastic changes in the land-use structure of the study area were detected during the period 2014-2015. Heterogeneously distributed decreases in horticulture farms (4-6%), livestock farms (5-6%) and croplands (3-4%), and an increase in barren land (6-7%), were observed. A way to analyze variations in land-cover productivity using satellite data is proposed; the algorithm is based on the analysis of time series of NDVI and NDWI distributions. Drastic changes in crop area and productivity were detected. A set of indirect indicators, such as night-light intensity, is also considered. Using the proposed approach and the data described, the local and regional GDP, the local population, and its dynamics are estimated.
Estimating integrated variance in the presence of microstructure noise using linear regression
NASA Astrophysics Data System (ADS)
Holý, Vladimír
2017-07-01
Using financial high-frequency data for the estimation of the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
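A minimal sketch of the regression idea on simulated prices: under iid noise the expected realized variance computed from n returns is roughly IV + 2·n·ω², so regressing subsample realized variances on the number of returns gives the integrated variance as the intercept and the noise variance from the slope. The simulation parameters are illustrative, and this is only one simple variant consistent with the idea described, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, sigma, omega = 23400, 0.01, 0.0005            # one trading day, 1-second grid
efficient = np.cumsum(rng.normal(0, sigma / np.sqrt(n_obs), n_obs))
observed = efficient + rng.normal(0, omega, n_obs)    # log-price with microstructure noise

def realized_variance(p, step):
    r = np.diff(p[::step])
    return np.sum(r ** 2), r.size                     # RV and number of returns used

steps = np.arange(1, 61)                              # subsample every 1..60 seconds
rv, n = np.array([realized_variance(observed, s) for s in steps]).T

# Linear regression RV = IV + 2*omega^2 * n  (intercept = integrated variance).
slope, intercept = np.polyfit(n, rv, 1)
print(intercept, sigma ** 2)          # estimated vs. true integrated variance
print(np.sqrt(slope / 2), omega)      # implied vs. true noise standard deviation
```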
NASA Astrophysics Data System (ADS)
Li, Deying; Yin, Kunlong; Gao, Huaxi; Liu, Changchun
2009-10-01
Although the Three Gorges Dam project across the Yangtze River in China can utilize a huge potential source of hydroelectric power and eliminate the loss of life and damage caused by floods, it also causes environmental problems, such as geo-hazards, due to the large rise and fluctuation of the water level. In order to prevent and predict geo-hazards, the establishment of a geo-hazard prediction system is necessary. In order to implement the functions of regional and urban geo-hazard prediction, single geo-hazard prediction, prediction of landslide surge and risk evaluation, the logical layers of the system consist of a data capturing layer, a data manipulation and processing layer, an analysis and application layer, and an information publication layer. Because the system draws on multi-source spatial data, the transformation and fusion of these data are also investigated in this paper. The applicability of the system was demonstrated on the spatial prediction of landslide hazard through GIS spatial analysis, in which the information value method was applied to identify areas susceptible to future landslides on the basis of historical records of past landslides, terrain parameters, geology, rainfall and anthropogenic activity. A detailed discussion is given of the spatial distribution characteristics of landslide hazard in the new town of Badong. These results can be used for risk evaluation. The system can be implemented as an early-warning and emergency management tool by the relevant authorities of the Three Gorges Reservoir in the future.
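A minimal sketch of the information value method mentioned above, with hypothetical rasters and factor classes: the information value of each class of a conditioning factor is the log ratio of the landslide density within that class to the overall landslide density, and per-factor values are summed across factors to give a susceptibility index.

```python
import numpy as np

def information_value(class_map, landslide_mask):
    """Information value per class of one conditioning factor.

    IV_j = ln( (landslide pixels in class j / pixels in class j)
               / (all landslide pixels / all pixels) )
    """
    dens_total = landslide_mask.sum() / landslide_mask.size
    iv = {}
    for j in np.unique(class_map):
        in_class = class_map == j
        dens_class = landslide_mask[in_class].sum() / in_class.sum()
        iv[j] = np.log((dens_class + 1e-9) / dens_total)
    return iv

# Hypothetical 100x100 rasters: slope classes 0-3 and historical landslide cells.
rng = np.random.default_rng(7)
slope_class = rng.integers(0, 4, (100, 100))
landslides = rng.random((100, 100)) < 0.02 * (slope_class + 1)   # steeper -> more slides

iv_slope = information_value(slope_class, landslides)
susceptibility = np.vectorize(iv_slope.get)(slope_class)  # per-pixel IV for this factor
print(iv_slope)
```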
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
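A minimal sketch of turning EEMD output into the IMF energy features that feed the fault-feature layer described above. The decomposition itself is assumed to come from an external EEMD implementation (for example the PyEMD package), so only the feature step is shown, and the "IMFs" here are a synthetic stand-in rather than a real decomposition.

```python
import numpy as np

def imf_energy_features(imfs):
    """Relative energy of each intrinsic mode function (IMF).

    `imfs` is a 2D array of shape (n_imfs, n_samples) as returned by an
    EEMD implementation; the energies are normalized to sum to one so they
    can be discretized and entered as evidence in the fault-feature layer
    of a Bayesian network.
    """
    energy = np.sum(imfs ** 2, axis=1)
    return energy / energy.sum()

# Synthetic stand-in for EEMD output: three oscillatory modes plus a trend.
t = np.linspace(0, 1, 2000)
imfs = np.vstack([0.2 * np.sin(2 * np.pi * 350 * t),   # high-frequency mesh component
                  0.8 * np.sin(2 * np.pi * 45 * t),    # shaft-related component
                  0.4 * np.sin(2 * np.pi * 5 * t),
                  0.1 * t])                             # residual trend
print(imf_energy_features(imfs))
```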
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; El–Nagdy, M. S.; Badawy, B. M.; Osman, W.; Fayed, M.
2016-06-01
The grey particle production following 60A and 200A GeV 16O interactions with emulsion nuclei is investigated at different centralities. The evaporated target fragment multiplicity is adopted as a centrality parameter. The target size effect is examined over a wide range, where the C, N and O nuclei represent the light target group while the Br and Ag nuclei are the heavy group. In the framework of the nuclear limiting fragmentation hypothesis, the grey particle multiplicity characteristics depend only on the target size and centrality, while the projectile size and energy are not effective. Grey particle production is suggested to arise from a multisource system. The emission direction in the 4π space depends upon the production source. Either exponential decay or Poisson-like peaking curves are the usual characteristic shapes of the grey particle multiplicity distributions. The decay shape is suggested to be a characteristic feature of a single source while the peaking shape is a multisource superposition. The sensitivity to centrality varies from one source to another. The distribution shape is identified at each centrality region according to the associated source contribution. In general, the multiplicity characteristics seem to be limited with respect to the collision system centrality for light target nuclei. The selection of the black particle multiplicity as a centrality parameter is successful for collisions with heavy target nuclei; for collisions with light target nuclei it may be qualitatively better to adopt another centrality parameter.
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be simultaneously pictured by various sensors. However, there are many scenarios where no one sensor can give the complete picture. Image fusion is an important approach to solve this problem and produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), the data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros in the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with the widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and is superior to the conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), the mutual information, etc.
Cullen, Michael W; Reed, Darcy A; Halvorsen, Andrew J; Wittich, Christopher M; Kreuziger, Lisa M Baumann; Keddis, Mira T; McDonald, Furman S; Beckman, Thomas J
2011-03-01
To determine whether standardized admissions data in residents' Electronic Residency Application Service (ERAS) submissions were associated with multisource assessments of professionalism during internship. ERAS applications for all internal medicine interns (N=191) at Mayo Clinic entering training between July 1, 2005, and July 1, 2008, were reviewed by 6 raters. Extracted data included United States Medical Licensing Examination scores, medicine clerkship grades, class rank, Alpha Omega Alpha membership, advanced degrees, awards, volunteer activities, research experiences, first author publications, career choice, and red flags in performance evaluations. Medical school reputation was quantified using U.S. News & World Report rankings. Strength of comparative statements in recommendation letters (0 = no comparative statement, 1 = equal to peers, 2 = top 20%, 3 = top 10% or "best") were also recorded. Validated multisource professionalism scores (5-point scales) were obtained for each intern. Associations between application variables and professionalism scores were examined using linear regression. The mean ± SD (minimum-maximum) professionalism score was 4.09 ± 0.31 (2.13-4.56). In multivariate analysis, professionalism scores were positively associated with mean strength of comparative statements in recommendation letters (β = 0.13; P = .002). No other associations between ERAS application variables and professionalism scores were found. Comparative statements in recommendation letters for internal medicine residency applicants were associated with professionalism scores during internship. Other variables traditionally examined when selecting residents were not associated with professionalism. These findings suggest that faculty physicians' direct observations, as reflected in letters of recommendation, are useful indicators of what constitutes a best student. Residency selection committees should scrutinize applicants' letters for strongly favorable comparative statements.
A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators
NASA Technical Reports Server (NTRS)
Snyder, David B.; Wolford, David S.
2012-01-01
NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced and normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not re-performed each time, since the measurement matrix needs to be only approximate; because an iterative approach is used, the method continues to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
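A minimal Python sketch of the matrix-based adjustment loop described above. The function measure_isc(settings) is a hypothetical stand-in for reading the three sub-cell short-circuit currents at the given lamp settings; the step size, tolerance and iteration count are illustrative assumptions.

import numpy as np

def build_response_matrix(measure_isc, settings, step=0.05):
    """Approximate A[i, j] = d(Isc_i)/d(setting_j) with one-sided finite differences."""
    base = measure_isc(settings)
    A = np.zeros((len(base), len(settings)))
    for j in range(len(settings)):
        perturbed = settings.copy()
        perturbed[j] += step
        A[:, j] = (measure_isc(perturbed) - base) / step
    return A

def adjust_sources(measure_isc, settings, isc_target, A, tol=1e-3, max_iter=10):
    """Update all source settings at once each iteration: delta_s = A^-1 * delta_Isc."""
    for _ in range(max_iter):
        delta_isc = isc_target - measure_isc(settings)
        if np.max(np.abs(delta_isc)) < tol:
            break
        settings = settings + np.linalg.solve(A, delta_isc)  # A need only be approximate
    return settings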
Lv, Ying; Huang, Guohe; Sun, Wei
2013-01-01
A scenario-based interval two-phase fuzzy programming (SITF) method was developed for water resources planning in a wetland ecosystem. The SITF approach incorporates two-phase fuzzy programming, interval mathematical programming, and scenario analysis within a general framework. It can tackle fuzzy and interval uncertainties in cost coefficients, resource availabilities, water demands, hydrological conditions and other parameters within a multi-source supply and multi-sector consumption context. The SITF method has the advantage of effectively improving the membership degrees of the system objective and all fuzzy constraints, so that both a higher satisfaction grade of the objective and more efficient utilization of system resources can be guaranteed. Under the systematic consideration of water demands by the ecosystem, the SITF method was successfully applied to Baiyangdian Lake, the largest wetland in North China. Multi-source supplies (including the inter-basin water sources of Yuecheng Reservoir and the Yellow River) and multiple water users (including agricultural, industrial and domestic sectors) were taken into account. The results indicated that the SITF approach would generate useful solutions to identify long-term water allocation and transfer schemes under multiple economic, environmental, ecological, and system-security targets. It can support a comparative analysis of the system satisfaction degrees of decisions under various policy scenarios. Moreover, it helps quantify the relationship between hydrological change and human activities, such that a scheme for ecologically sustainable water supply to Baiyangdian Lake can be achieved. Copyright © 2012 Elsevier B.V. All rights reserved.
Violato, Claudio; Lockyer, Jocelyn M; Fidler, Herta
2008-10-01
Multi-source feedback (MSF) enables performance data to be provided to doctors from patients, co-workers and medical colleagues. This study examined the evidence for the validity of MSF instruments for general practice, investigated changes in performance for doctors who participated twice, 5 years apart, and determined the association between change in performance and initial assessment and socio-demographic characteristics. Data for 250 doctors included three datasets per doctor from, respectively, 25 patients, eight co-workers and eight medical colleagues, collected on two occasions. There was high internal consistency (alpha > 0.90) and adequate generalisability (Ep(2) > 0.70). D study results indicate adequate generalisability coefficients for groups of eight assessors (medical colleagues, co-workers) and 25 patient surveys. Confirmatory factor analyses provided evidence for the validity of factors that were theoretically expected, meaningful and cohesive. Comparative fit indices were 0.91 for medical colleague data, 0.87 for co-worker data and 0.81 for patient data. Paired t-test analysis showed significant change between the two assessments from medical colleagues and co-workers, but not between the two patient surveys. Multiple linear regressions explained 2.1% of the variance at time 2 for medical colleagues, 21.4% of the variance for co-workers and 16.35% of the variance for patient assessments, with professionalism a key variable in all regressions. There is evidence for the construct validity of the instruments and for their stability over time. Upward changes in performance will occur, although their effect size is likely to be small to moderate.
2010-12-01
processes. Novice estimators must often use these complicated cost estimation tools (e.g., ACEIT, SEER-H, SEER-S, PRICE-H, PRICE-S, etc.) until...However, the thesis will leverage the processes embedded in cost estimation tools such as the Automated Cost Estimating Integration Tool (ACEIT) and the
DOT National Transportation Integrated Search
2012-05-01
The Vermont Integrated Land-Use and Transportation Carbon Estimator (VILTCE) project is part of a larger effort to develop environmental metrics related to travel, and to integrate these tools into a travel model under UVM TRC Signature Project No. 1...
[Quality by design approaches for pharmaceutical development and manufacturing of Chinese medicine].
Xu, Bing; Shi, Xin-Yuan; Wu, Zhi-Sheng; Zhang, Yan-Ling; Wang, Yun; Qiao, Yan-Jiang
2017-03-01
Pharmaceutical quality is built by design, formed in the manufacturing process and improved during the product's lifecycle. Based on a comprehensive literature review of pharmaceutical quality by design (QbD), the essential ideas and implementation strategies of pharmaceutical QbD were interpreted. Considering the complex nature of Chinese medicine, the "4H" model was proposed for implementing QbD in the pharmaceutical development and industrial manufacture of Chinese medicine products. "4H" is an acronym for holistic design, holistic information analysis, holistic quality control, and holistic process optimization, which is consistent with the holistic concept of Chinese medicine theory. Holistic design aims at constructing both the quality problem space from patient requirements and the quality solution space from multidisciplinary knowledge. Holistic information analysis emphasizes understanding the quality pattern of Chinese medicine by integrating and mining multisource data and information at a relatively high level. Batch-to-batch quality consistency and manufacturing system reliability can be realized by the comprehensive application of inspective, statistical, predictive and intelligent quality control strategies. Holistic process optimization aims to improve product quality and process capability during product lifecycle management. Implementing QbD is useful for resolving contradictions in the ecosystem of pharmaceutical development and manufacturing of Chinese medicine products, and helps guarantee cost effectiveness. Copyright© by the Chinese Pharmaceutical Association.
Castrignanò, Annamaria; Quarto, Ruggiero; Vitti, Carolina; Langella, Giuliano; Terribile, Fabio
2017-01-01
To assess spatial variability at the very fine scale required by Precision Agriculture, different proximal and remote sensors have been used. They provide large amounts and different types of data which need to be combined. An integrated approach, using multivariate geostatistical data-fusion techniques and multi-source geophysical sensor data to determine simple summary scale-dependent indices, is described here. These indices can be used to delineate management zones to be submitted to differential management. Such a data fusion approach with geophysical sensors was applied in a soil of an agronomic field cropped with tomato. The synthetic regionalized factors determined contributed to splitting the 3D edaphic environment into two main horizontal structures with different hydraulic properties and to disclosing two main horizons in the 0–1.0-m depth, with a discontinuity probably occurring between 0.40 m and 0.70 m. Comparing this partition with the soil properties measured with a shallow sampling, it was possible to verify the coherence in the topsoil between the dielectric properties and other properties more directly related to agronomic management. These results confirm the advantages of using proximal sensing as a preliminary step in the application of site-specific management. Combining disparate spatial data (data fusion) is not at all a naive problem, and novel and powerful methods need to be developed. PMID:29207510
Interactive analysis of geodata based intelligence
NASA Astrophysics Data System (ADS)
Wagner, Boris; Eck, Ralf; Unmüessig, Gabriel; Peinsipp-Byma, Elisabeth
2016-05-01
When a spatiotemporal event happens, multi-source intelligence data is gathered to understand the problem, and strategies for solving the problem are investigated. The difficulties arising from handling spatial and temporal intelligence data represent the main problem. The map can serve as a bridge to visualize the data and provide the most understandable model for all stakeholders. For the analysis of geodata-based intelligence, software was developed as a working environment that combines geodata with optimized ergonomics. Interaction with the common operational picture (COP) is thereby essentially facilitated. The composition of the COP is based on geodata services, which follow the international standards of the Open Geospatial Consortium (OGC). The basic geodata are combined with intelligence data from images (IMINT) and humans (HUMINT), stored in a NATO Coalition Shared Data Server (CSD). These intelligence data can be combined with further information sources, e.g., live sensors. As a result, a COP is generated and an interaction suitable for the specific workspace is added. This allows users to work interactively with the COP, e.g., searching with an on-board CSD client for suitable intelligence data and integrating them into the COP. Furthermore, users can enrich the scenario with findings from the data of interactive live sensors and add data from other sources. This allows intelligence services to contribute effectively to the processes by which military operations and disaster management are organized.
Literacy skills gaps: A cross-level analysis on international and intergenerational variations
NASA Astrophysics Data System (ADS)
Kim, Suehye
2018-02-01
The global agenda for sustainable development has centred lifelong learning on UNESCO's Education 2030 Framework for Action. The study described in this article aimed to examine international and intergenerational variations in literacy skills gaps within the context of the United Nations Sustainable Development Goals (SDGs). For this purpose, the author examined the trend of literacy gaps in different countries using multilevel and multisource data from the OECD's Programme for the International Assessment of Adult Competencies (PIAAC) and UNESCO Institute for Lifelong Learning survey data from the third edition of the Global Report on Adult Learning and Education (GRALE III). In this article, particular attention is paid to exploring the specific effects of education systems on literacy skills gaps among different age groups. Key findings of this study indicate substantial intergenerational literacy gaps within countries as well as different patterns of literacy gaps across countries. Young generations generally outscore older adults in literacy skills, but feature bigger gaps when examined by gender and social origin. In addition, this study finds an interesting tendency for young generations to benefit from a system of Recognition, Validation and Accreditation (RVA) in closing literacy gaps by formal schooling at country level. This implies the potential of an RVA system for tackling educational inequality in initial schooling. The article concludes with suggestions for integrating literacy skills as a foundation of lifelong learning into national RVA frameworks and mechanisms at system level.
Classification of permafrost active layer depth from remotely sensed and topographic evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peddle, D.R.; Franklin, S.E.
1993-04-01
The remote detection of permafrost (perennially frozen ground) has important implications to environmental resource development, engineering studies, natural hazard prediction, and climate change research. In this study, the authors present results from two experiments into the classification of permafrost active layer depth within the zone of discontinuous permafrost in northern Canada. A new software system based on evidential reasoning was implemented to permit the integrated classification of multisource data consisting of landcover, terrain aspect, and equivalent latitude, each of which possessed different formats, data types, or statistical properties that could not be handled by conventional classification algorithms available to this study. In the first experiment, four active layer depth classes were classified using ground based measurements of the three variables with an accuracy of 83% compared to in situ soil probe determination of permafrost active layer depth at over 500 field sites. This confirmed the environmental significance of the variables selected, and provided a baseline result to which a remote sensing classification could be compared. In the second experiment, evidence for each input variable was obtained from image processing of digital SPOT imagery and a photogrammetric digital elevation model, and used to classify active layer depth with an accuracy of 79%. These results suggest the classification of evidence from remotely sensed measures of spectral response and topography may provide suitable indicators of permafrost active layer depth.
NASA Astrophysics Data System (ADS)
Xiao, Xin-hong; Xiao, Pei-wei; Dai, Feng; Li, Hai-bo; Zhang, Xue-bin; Zhou, Jia-wen
2018-02-01
The underground powerhouse of the Houziyan Hydropower Station is under the conditions of high geo-stress and a low strength/stress ratio, which leads to significant rock deformation and failures, especially for rock pillars due to bidirectional unloading during the excavation process. Damages occurred in thinner rock pillars after excavation due to unloading and stress concentration, which will reduce the surrounding rock integrity and threaten the safety of the underground powerhouse. By using field investigations and multi-source monitoring data, the deformation and failure characteristics of a rock pillar are analyzed from the tempo-spatial distribution features. These results indicate that significant deformation occurred in the rock pillar when the powerhouse was excavated to the fourth layer, and the maximum displacement reached 107.57 mm, which occurred on the main transformer chamber upstream sidewall at an elevation of 1721.20 m. The rock deformation surrounding the rock pillar is closely related to the excavation process and has significant time-related characteristics. To control large deformation of the rock pillar, thru-anchor cables were used to reinforce the rock pillar to ensure the stability of the powerhouse. The rock deformation surrounding the rock pillar decreases gradually and forms a convergent trend after reinforcement measures are installed based on the analysis of the temporal characteristics and the rock pillar deformation rate.
NASA Astrophysics Data System (ADS)
Beck, Hylke E.; Vergopolan, Noemi; Pan, Ming; Levizzani, Vincenzo; van Dijk, Albert I. J. M.; Weedon, Graham P.; Brocca, Luca; Pappenberger, Florian; Huffman, George J.; Wood, Eric F.
2017-12-01
We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000-2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized ( < 50 000 km2) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1). Our results highlight large differences in estimation accuracy, and hence the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite-, and reanalysis-based P estimates.
A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area
NASA Astrophysics Data System (ADS)
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto
2015-04-01
In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions in a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of this natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and of its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained show that the PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but it is of a smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study, but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
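The stratified sampling of volcanological inputs can be pictured with a short Python sketch: each parameter is drawn once per equal-probability stratum of its probability density (inverse-CDF sampling), so the whole range of natural variability feeds the dispersal simulations. The lognormal distribution and its parameters below are hypothetical placeholders, not the distributions used in the study.

import numpy as np
from scipy.stats import lognorm

def stratified_sample(dist, n_samples, rng=np.random.default_rng(0)):
    """One draw per equal-probability stratum of the distribution (inverse-CDF sampling)."""
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    u = rng.uniform(edges[:-1], edges[1:])       # one uniform draw inside each stratum
    return dist.ppf(u)

# hypothetical erupted-mass distribution (kg); each sampled mass would drive one simulation
erupted_mass = stratified_sample(lognorm(s=1.0, scale=1e11), n_samples=50)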
Prouteau, A; Stéfan, A; Wiart, L; Mazaux, J M
2016-02-01
Behavioural changes are the main cause of difficulties in interpersonal relationships and social integration among traumatic brain injury (TBI) patients. The Société française de médecine physique et réadaptation (SOFMER) decided to develop recommendations for the treatment and care provision for these problem under the auspices of the French health authority, the Haute Autorité de la santé (HAS). Assessment of behaviour is essential to describe, understand and define situations, assess any change and suggest lines for intervention. The relationship of these behavioural changes with the brain lesion is likewise of crucial importance in legal and forensic expertise. Using a literature review and expert opinions, the aim was to define the optimal conditions for the collection of data on behavioural changes in individuals having sustained brain trauma, to identify the situations in which they arise, to review the instruments available, and to suggest lines of intervention. A literature search identified 981 articles, among which 122 on the target subject were selected and analysed in detail and confronted with the experience of professionals and user representatives. A first draft of the recommendations was produced by the working group, and then submitted to a review group for opinions and complements. The literature on this subject is heterogeneous, and presents low levels of evidence. No article enabled the development of recommendations above the "expert opinion" level. After prior clarification of the aims of the evaluation, it is recommended first to carefully describe the changes in behaviour, from patient and third-person narratives, and where possible from direct observations. The information enabling the description of the phenomena occurring should be collected by different individuals (multi-source evaluation): the patient, his or her close circle, and professionals with different training backgrounds (multidisciplinary evaluation). The analysis of triggering or associated factors requires an assessment of cognitive functions and any neurological pathology (seizures). After confrontation and synthesis, the information should be completed using one or several behavioural scales, which provide objectivity and reproducibility. The main generic and specific scales are presented, with their advantages, drawbacks and validation references. The group did not wish to recommend any one of them in particular. The evaluation of behavioural changes is essential, since without it a therapeutic strategy and appropriate orientation cannot be implemented. The emphasis should be put on contextualised, multi-source and multidisciplinary evaluation, including validated behavioural scales. In this area, nevertheless, evaluation is still restricted by several methodological limitations. Further research is needed to improve the standardisation of data collection and the psychometric properties of the instruments. A European harmonisation of these procedures is also greatly needed. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Comparison of risk estimates using life-table methods.
Sullivan, R E; Weng, P S
1987-08-01
Risk estimates promulgated by various radiation protection authorities in recent years have become increasingly more complex. Early "integral" estimates in the form of health effects per 0.01 person-Gy (per person-rad) or per 10⁴ person-Gy (per 10⁶ person-rad) have tended to be replaced by "differential" estimates which are age- and sex-dependent and specify both minimum induction (latency) and duration of risk expression (plateau) periods. These latter types of risk estimate must be used in conjunction with a life table in order to reduce them to integral form. In this paper, the life table has been used to effect a comparison of the organ and tissue risk estimates derived in several recent reports. In addition, a brief review of life-table methodology is presented and some features of the models used in deriving differential coefficients are discussed. While the great number of permutations possible with dose-response models, detailed risk estimates and proposed projection models precludes any unique result, the reduced integral coefficients are required to conform to the linear, absolute-risk model recommended for use with the integral risk estimates reviewed.
Generalized information fusion and visualization using spatial voting and data modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.
2013-05-01
We present a novel and innovative information fusion and visualization framework for multi-source intelligence (multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be converted into numerical form for further processing downstream, followed by a short description of how this information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber layers for the purpose of tracking cyber personas. Finally we describe a path ahead for creating interactive agile networks through defender customized Cyber-cubes for network configuration and attack visualization.
Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J
2016-08-01
Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.
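A hedged sketch of the classification stage in Python, assuming EOG-derived features (for example, statistics of CEEMDAN modes) have already been extracted into a feature matrix with one row per scored epoch. The array shapes, placeholder data and hyperparameters are illustrative assumptions, not the published configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # placeholder for CEEMDAN-based EOG features
y = rng.integers(0, 5, size=1000)        # placeholder labels for 5 sleep stages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))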
Are proactive personalities always beneficial? Political skill as a moderator.
Sun, Shuhua; van Emmerik, Hetty I J
2015-05-01
Does proactive personality always enhance job success? The authors of this study draw on socioanalytic theory of personality and organizational political perspectives to study employees' political skill in moderating the effects of proactive personality on supervisory ratings of employee task performance, helping behaviors, and learning behaviors. Multisource data from 225 subordinates and their 75 immediate supervisors reveal that proactive personality is associated negatively with supervisory evaluations when political skill is low, and the negative relationship disappears when political skill is high. Implications and future research directions are discussed. (c) 2015 APA, all rights reserved.
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometrically regular directions and geometric flow, while sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the performance of the new method is better than that of the tested methods according to objective evaluation indexes and subjective visual effects.
Dragoni, Lisa; Kuenzi, Maribeth
2012-09-01
With a multisource sample comprising 1,150 employees and 230 supervisors, we investigate the effect of leader goal orientation on leader's perceptions of unit performance. We propose that a leader's goal orientation indirectly impacts performance perceptions via the shared achievement goal adopted within the unit (i.e., unit goal orientation). Further, we hypothesize that the presence and impact of unit goal orientation depend on the work unit structure. We find general support for this moderated mediation model, with the strongest evidence being associated with the learning and prove dimensions of goal orientation.
Topics in global convergence of density estimates
NASA Technical Reports Server (NTRS)
Devroye, L.
1982-01-01
The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f(n), an arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n)-f|); (2) in theoretical comparisons of density estimates, ∫|f(n)-f| should be used and not ∫|f(n)-f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f(n)-f| converges (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions (class-conditional probability densities) often overlap. The spectral response distributions of thematic classes depend on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 × |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
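To make the semi-supervised idea concrete, the following Python sketch fits class-conditional Gaussians to a few labeled samples and then refines them with EM iterations over unlabeled samples. It is a generic illustration of semi-supervised Gaussian mixture estimation, not the specific hybrid algorithm of the paper, and it omits the ancillary geospatial evidence discussed in the abstract.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def semi_supervised_gaussians(X_lab, y_lab, X_unl, n_classes, n_iter=20, reg=1e-6):
    """Assumes at least two labeled samples per class for the initial covariance estimates."""
    d = X_lab.shape[1]
    # initialise priors, means and covariances from the labeled samples only
    priors = np.array([(y_lab == k).mean() for k in range(n_classes)])
    means = np.array([X_lab[y_lab == k].mean(axis=0) for k in range(n_classes)])
    covs = np.array([np.cov(X_lab[y_lab == k].T) + reg * np.eye(d) for k in range(n_classes)])
    R_lab = np.eye(n_classes)[y_lab]                       # hard responsibilities (labeled)
    for _ in range(n_iter):
        # E-step: soft responsibilities of unlabeled samples
        lik = np.column_stack([priors[k] * mvn(means[k], covs[k]).pdf(X_unl)
                               for k in range(n_classes)])
        R_unl = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from labeled + unlabeled samples
        X_all = np.vstack([X_lab, X_unl])
        R_all = np.vstack([R_lab, R_unl])
        Nk = R_all.sum(axis=0)
        priors = Nk / Nk.sum()
        means = (R_all.T @ X_all) / Nk[:, None]
        for k in range(n_classes):
            diff = X_all - means[k]
            covs[k] = (R_all[:, k][:, None] * diff).T @ diff / Nk[k] + reg * np.eye(d)
    return priors, means, covs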
Advancing the capabilities of reservoir remote sensing by leveraging multi-source satellite data
NASA Astrophysics Data System (ADS)
Gao, H.; Zhang, S.; Zhao, G.; Li, Y.
2017-12-01
With a total global capacity of more than 6000 km3, reservoirs play a key role in the hydrological cycle and in water resources management. However, essential reservoir data (e.g., elevation, storage, and evaporation loss) are usually not shared at a large scale. While satellite remote sensing offers a unique opportunity for monitoring large reservoirs from space, the commonly used radar altimeters can only detect storage variations for about 15% of global lakes at a repeat period of 10 days or longer. To advance the capabilities of reservoir sensing, we developed a series of algorithms geared towards generating long-term reservoir records at improved spatial coverage and temporal resolution. To this end, observations are leveraged from multiple satellite sensors, which include radar/laser altimeters, imagers, and passive microwave radiometers. In South Asia, we demonstrate that reservoir storage can be estimated under all-weather conditions at a 4-day time step, with the total capacity of monitored reservoirs increased to 45%. Within the continental United States, a first Landsat-based evaporation loss dataset (containing 204 reservoirs) was developed for 1984 to 2011. The evaporation trends of these reservoirs are identified and the causes are analyzed. All of these algorithms and products were validated with gauge observations. Future satellite missions, which will make significant contributions to monitoring global reservoirs, are also discussed.
NASA Astrophysics Data System (ADS)
Zhang, Jixian; Zhengjun, Liu; Xiaoxia, Sun
2009-12-01
The eco-environment in the Three Gorges Reservoir Area (TGRA) in China has received much attention due to the construction of the Three Gorges Hydropower Station. Land use/land cover changes (LUCC) are a major cause of ecological environmental changes. In this paper, the spatial landscape dynamics from 1978 to 2005 in this area are monitored and recent changes are analyzed, using the Landsat TM (MSS) images of 1978, 1988, 1995, 2000 and 2005. Vegetation cover fractions for a vegetation cover analysis are retrieved from MODIS/Terra imagery from 2000 to 2006, covering the period before and after the rise of the reservoir water level. Several analytical indices have been used to analyze spatial and temporal changes. Results indicate that cropland, woodland, and grassland areas decreased continuously over the past 30 years, while river and built-up areas increased by 2.79% and 4.45% from 2000 to 2005, respectively. The built-up area increased at the cost of decreased cropland, woodland and grassland. The vegetation cover fraction increased slightly. We conclude that significant changes in land use/land cover have occurred in the Three Gorges Reservoir Area. The main cause is continuous economic and urban/rural development, followed by environmental management policies after construction of the Three Gorges Dam.
[Surveillance of work-related suicide in France: An exploratory study].
Bossard, C; Santin, G; Lopez, V; Imbernon, E; Cohidon, C
2016-06-01
Despite large media coverage of the phenomenon, the number of work-related suicides is currently unknown in France. There are nevertheless some data available to document this important issue. The aim of this study was to explore the feasibility of an epidemiological surveillance system for work-related suicides designed to quantify and describe work-related suicides, mainly according to economic sectors and occupational categories. Existing data sources in France were identified and evaluated for their relevance and their potential use in a multi-source surveillance system. A regional pilot study was performed using the main relevant sources identified to investigate different aspects of the system design. Four major data sources were identified for describing work-related suicides: death certificates, social insurance funds, data collected by officers of the labor inspectorate, and data collected from autopsy reports in forensic departments. The regional pilot study gave an estimate of 28 cases of work-related suicide over two years. The findings point out the difficulties involved and the criteria for successful implementation of such a system. The study provides some solutions for carrying out this system, the achievement of which will depend upon particular resources and partners' agreements. Recommendations for the next steps have been made based on this work, including possible collaboration with forensic departments, which collect essential data for surveillance. Copyright © 2016. Published by Elsevier Masson SAS.
Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.
Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue
2018-05-25
A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids map distortion due to lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, an ultrasonic sensor and the inertial sensor, which yields a continuous localization result and effectively reduces the position drift due to long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
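The fusion step of a federated Kalman filter can be summarized in a few lines of Python: each local filter (marker, optical flow, ultrasonic/inertial) supplies its own state estimate and covariance, and the master filter combines them by information-weighted averaging. This shows only the generic fusion equation under assumed example numbers, not the paper's full filter design.

import numpy as np

def federated_fuse(estimates, covariances):
    """Fuse local estimates x_i with covariances P_i: P = (sum P_i^-1)^-1, x = P * sum(P_i^-1 x_i)."""
    info = np.zeros_like(covariances[0])
    info_state = np.zeros_like(estimates[0])
    for x_i, P_i in zip(estimates, covariances):
        P_inv = np.linalg.inv(P_i)
        info += P_inv
        info_state += P_inv @ x_i
    P_fused = np.linalg.inv(info)
    return P_fused @ info_state, P_fused

# e.g. fusing a marker-based and an optical-flow 2-D position estimate (illustrative numbers)
x_fused, P_fused = federated_fuse(
    [np.array([1.02, 0.48]), np.array([0.97, 0.52])],
    [np.diag([0.01, 0.01]), np.diag([0.04, 0.04])])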
Meng, Wenjun; Zhong, Qirui; Yun, Xiao; Zhu, Xi; Huang, Tianbo; Shen, Huizhong; Chen, Yilin; Chen, Han; Zhou, Feng; Liu, Junfeng; Wang, Xinming; Zeng, Eddy Y; Tao, Shu
2017-03-07
There is increasing evidence indicating the critical role of ammonia (NH3) in the formation of secondary aerosols. Therefore, a high-quality NH3 emission inventory is important for modeling particulate matter in the atmosphere. Unfortunately, without directly measured emission factors (EFs) in developing countries, using data from developed countries could result in an underestimation of these emissions. A series of newly reported EFs for China provides an opportunity to update the NH3 emission inventory. In addition, a recently released fuel consumption data product has allowed a multisource, high-resolution inventory to be assembled. In this study, an improved global NH3 emission inventory for combustion and industrial sources with high sectorial (70 sources), spatial (0.1° × 0.1°), and temporal (monthly) resolutions was compiled for the years 1960 to 2013. The estimated emissions from the transportation sector (1.59 Tg) in 2010 were 2.2 times higher than those of previous reports. The spatial variation of the emissions was associated with population, gross domestic production, and temperature. Unlike other major air pollutants, NH3 emissions continue to increase, even in developed countries, which is likely caused by an increased use of biomass fuel in the residential sector. The emission density of NH3 in urban areas is an order of magnitude higher than in rural areas.
A Methodology for Developing Army Acquisition Strategies for an Uncertain Future
2007-01-01
manuscript for publication. Acronyms ABP Assumption-Based Planning ACEIT Automated Cost Estimating Integrated Tool ACR Armored Cavalry Regiment ACTD...decisions. For example, they employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life cycle cost estimates; other tools are
NASA Astrophysics Data System (ADS)
Camporese, Matteo; Botto, Anna
2017-04-01
Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows us to integrate multisource observation data in modeling predictions and, in doing so, to reduce uncertainty. For this reason, data assimilation has been recently the focus of much attention also for physically-based integrated hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously, in an attempt to tackle environmental problems in a holistic approach. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). One of the typical assumptions in these studies is that the measurement errors are uncorrelated, whereas in certain situations it is reasonable to believe that some degree of correlation occurs, due for example to the fact that a pair of sensors share the same soil type. The goal of this study is to show if and how the measurement error correlations between different observation data play a significant role on assimilation results in a real-world application of an integrated hydrological model. The model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope. The physical model, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with maximum height of 3.5 m, length of 6 m, and width of 2 m. The hillslope is equipped with sensors to monitor the pressure head and soil moisture responses to a series of generated rainfall events applied onto a 60 cm thick sand layer overlying a sandy clay soil. The measurement network is completed by two tipping bucket flow gages to measure the two components (subsurface and surface) of the outflow. By collecting data at a temporal resolution of 0.5 Hz (relatively high, compared to the hydrological dynamics), we can perform a comprehensive statistical analysis of the observations, including the cross-correlations between data from different sensors. We report on the impact of taking these correlations into account in a series of assimilation scenarios, where the EnKF is used to assimilate pressure head and/or soil moisture and/or subsurface outflow.
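A minimal Python sketch of a stochastic ensemble Kalman filter analysis step in which the observation error covariance R is allowed to be non-diagonal, i.e. errors from different sensors may be correlated. The state dimension, observation operator and the numbers in R are illustrative assumptions, not the CATHY/hillslope configuration.

import numpy as np

def enkf_update(ensemble, H, y_obs, R, rng=np.random.default_rng(0)):
    """ensemble: (n_state, n_members); H: (n_obs, n_state); y_obs: (n_obs,) array."""
    n_obs, n_members = len(y_obs), ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HA = H @ A                                             # observation-space anomalies
    P_yy = HA @ HA.T / (n_members - 1) + R
    P_xy = A @ HA.T / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain
    # perturbed observations drawn with the (possibly correlated) error covariance R
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_members).T
    return ensemble + K @ (Y - H @ ensemble)

# example of correlated pressure-head / soil-moisture errors for sensors in the same soil layer
R = np.array([[0.02, 0.01],
              [0.01, 0.02]])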
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet-based systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in raster and vector form, linked with non-spatial information. Comprehensive data are required to handle emergency situations in different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core datasets are the minimum required data, including base, thematic and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular disaster situation such as flood, cyclone, forest fire, earthquake, landslide or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of suitable resolution for visualisation and analysis in disaster management. The database therefore has to be interconnected and comprehensive to meet the requirements of Emergency Management. Such an integrated, comprehensive and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as data availability, sharing policies, compatible geospatial standards, and data interoperability. Therefore, to facilitate using, sharing, and integrating spatial data, there is a need to define standards for building emergency database systems. These include aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format and spatial format, ii) database organisation mechanisms covering data management, catalogues and data models, and iii) database dissemination through a suitable environment, as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale and multi-source data for effective emergency response, using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
Wycisk, Peter; Stollberg, Reiner; Neumann, Christian; Gossel, Wolfgang; Weiss, Holger; Weber, Roland
2013-04-01
A large-scale groundwater contamination characterises the Pleistocene groundwater system of the former industrial and abandoned mining region Bitterfeld/Wolfen, Eastern Germany. For more than a century, local chemical production and extensive lignite mining caused a complex contaminant release from local production areas and related dump sites. Today, organic pollutants (mainly organochlorines) are present in all compartments of the environment at high concentration levels. An integrated methodology for characterising the current situation of pollution as well as the future fate development of hazardous substances is highly required to decide on further management and remediation strategies. Data analyses have been performed on regional groundwater monitoring data from about 10 years, containing approximately 3,500 samples, and up to 180 individual organic parameters from almost 250 observation wells. Run-off measurements as well as water samples were taken biweekly from local creeks during a period of 18 months. A kriging interpolation procedure was applied on groundwater analytics to generate continuous distribution patterns of the nodal contaminant samples. High-resolution geological 3-D modelling serves as a database for a regional 3-D groundwater flow model. Simulation results support the future fate assessment of contaminants. A first conceptual model of the contamination has been developed to characterise the contamination in regional surface waters and groundwater. A reliable explanation of the variant hexachlorocyclohexane (HCH) occurrence within the two local aquifer systems has been derived from the regionalised distribution patterns. Simulation results from groundwater flow modelling provide a better understanding of the future pollutant migration paths and support the overall site characterisation. The presented case study indicates that an integrated assessment of large-scale groundwater contaminations often needs more data than only from local groundwater monitoring. The developed methodology is appropriate to assess POP-contaminated mega-sites including, e.g. HCH deposits. Although HCH isomers are relevant groundwater pollutants at this site, further organochlorine pollutants are present at considerably higher levels. The study demonstrates that an effective evaluation of the current situation of contamination as well as of the related future fate development requires detailed information of the entire observed system.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
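The flavour of the Lagrange-multiplier argument can be seen on a simplified analogue (an assumption for illustration only; the paper derives the exact expression for the Stokes-vector estimator). Suppose the estimator variance is a sum of per-measurement terms sigma_i^2/t_i and the total time is fixed at T; in LaTeX notation:

\min_{t_1,\dots,t_4}\ \sum_{i=1}^{4}\frac{\sigma_i^{2}}{t_i}
\quad\text{subject to}\quad \sum_{i=1}^{4} t_i = T
\;\Longrightarrow\;
-\frac{\sigma_i^{2}}{t_i^{2}}+\lambda = 0
\;\Longrightarrow\;
t_i^{\ast} = \frac{\sigma_i}{\sum_{j}\sigma_j}\,T ,
\qquad
\mathrm{Var}_{\min} = \frac{\bigl(\sum_j \sigma_j\bigr)^{2}}{T}.

By the Cauchy-Schwarz inequality this never exceeds the value 4\sum_j \sigma_j^{2}/T obtained with equal splitting, which is the qualitative reason an optimized time distribution reduces the estimator variance.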
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing Delta method and Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
NASA Astrophysics Data System (ADS)
Pleniou, Magdalini; Koutsias, Nikos
2013-05-01
The aim of our study was to explore the spectral properties of fire-scorched (burned) and non fire-scorched (vegetation) areas, as well as areas with different burn/vegetation ratios, using a multisource multiresolution satellite data set. A case study was undertaken following a very destructive wildfire that occurred in Parnitha, Greece, July 2007, for which we acquired satellite images from LANDSAT, ASTER, and IKONOS. Additionally, we created spatially degraded satellite data over a range of coarser resolutions using resampling techniques. The panchromatic (1 m) and multispectral component (4 m) of IKONOS were merged using the Gram-Schmidt spectral sharpening method. This very high-resolution imagery served as the basis to estimate the cover percentage of burned areas, bare land and vegetation at pixel level, by applying the maximum likelihood classification algorithm. Finally, multiple linear regression models were fit to estimate each land-cover fraction as a function of surface reflectance values of the original and the spatially degraded satellite images. The main findings of our research were: (a) the Near Infrared (NIR) and Short-wave Infrared (SWIR) are the most important channels to estimate the percentage of burned area, whereas the NIR and red channels are the most important to estimate the percentage of vegetation in fire-affected areas; (b) when the bi-spectral space consists only of NIR and SWIR, then the NIR ground reflectance value plays a more significant role in estimating the percent of burned areas, and the SWIR appears to be more important in estimating the percent of vegetation; and (c) semi-burned areas comprising 45-55% burned area and 45-55% vegetation are spectrally closer to burned areas in the NIR channel, whereas those areas are spectrally closer to vegetation in the SWIR channel. These findings, at least partially, are attributed to the fact that: (i) completely burned pixels present low variance in the NIR and high variance in the SWIR, whereas the opposite is observed in completely vegetated areas where higher variance is observed in the NIR and lower variance in the SWIR, and (ii) bare land modifies the spectral signal of burned areas more than the spectral signal of vegetated areas in the NIR, while the opposite is observed in SWIR region of the spectrum where the bare land modifies the spectral signal of vegetation more than the burned areas because the bare land and the vegetation are spectrally more similar in the NIR, and the bare land and burned areas are spectrally more similar in the SWIR.
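A minimal Python sketch of the regression step: the per-pixel burned-area fraction is fitted as a linear function of surface reflectance in selected bands. The band names and the synthetic numbers below are placeholders; the study used reflectances from Landsat, ASTER and IKONOS and fractions derived from the classified 1-m IKONOS map.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
nir = rng.uniform(0.05, 0.45, 500)            # placeholder NIR reflectance
swir = rng.uniform(0.05, 0.35, 500)           # placeholder SWIR reflectance
burned_frac = np.clip(0.9 - 1.5 * nir + 0.8 * swir + rng.normal(0, 0.05, 500), 0, 1)

X = np.column_stack([nir, swir])
model = LinearRegression().fit(X, burned_frac)
print("coefficients (NIR, SWIR):", model.coef_, "R^2:", model.score(X, burned_frac))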
Global 3-D ionospheric electron density reanalysis based on multisource data assimilation
NASA Astrophysics Data System (ADS)
Yue, Xinan; Schreiner, William S.; Kuo, Ying-Hwa; Hunt, Douglas C.; Wang, Wenbin; Solomon, Stanley C.; Burns, Alan G.; Bilitza, Dieter; Liu, Jann-Yenq; Wan, Weixing; Wickert, Jens
2012-09-01
We report preliminary results of a global 3-D ionospheric electron density reanalysis demonstration study covering 2002-2011 based on multisource data assimilation. The monthly global ionospheric electron density reanalysis was produced by assimilating quiet-day ionospheric data into a data assimilation model constructed from the International Reference Ionosphere (IRI) 2007 model and a Kalman filter technique. These data include global navigation satellite system (GNSS) observations of ionospheric total electron content (TEC) from ground-based stations, ionospheric radio occultations by the CHAMP, GRACE, COSMIC, SAC-C, Metop-A, and TerraSAR-X satellites, and Jason-1 and Jason-2 altimeter TEC measurements. The output of the reanalysis is 3-D gridded ionospheric electron densities with temporal and spatial resolutions of 1 h in universal time, 5° in latitude, 10° in longitude, and ˜30 km in altitude. The climatological features of the reanalysis results, such as solar activity dependence, seasonal variations, and the global morphology of the ionosphere, agree well with those in empirical models and observations. The global electron content derived from the International GNSS Service global ionospheric maps, the electron density profiles observed by the Poker Flat Incoherent Scatter Radar during 2007-2010, and foF2 observed by the global ionosonde network during 2002-2011 are used to validate the reanalysis method. All comparisons show that the reanalysis has smaller deviations and biases than the IRI-2007 predictions. Especially after April 2006, when the six COSMIC satellites were launched, the reanalysis shows significant improvement over the IRI predictions. The obvious overestimation of low-latitude ionospheric F-region densities by the IRI model during the 23/24 solar minimum is corrected well by the reanalysis. The potential applications and improvements of the reanalysis are also discussed.
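A single Kalman-filter analysis step of the kind used in such a reanalysis might look like the following Python sketch, in which gridded electron densities are updated from TEC observations through a linear observation operator. All dimensions, covariances, and values are toy assumptions, not those of the actual reanalysis system.

import numpy as np

n_grid, n_obs = 50, 10
rng = np.random.default_rng(1)

x_b = rng.uniform(1e11, 1e12, n_grid)           # background densities (el/m^3), e.g. from IRI
B = np.eye(n_grid) * (1e11) ** 2                # assumed background error covariance
H = rng.uniform(0, 5e4, (n_obs, n_grid))        # path-length weights mapping densities to TEC (m)
R = np.eye(n_obs) * (2.0 * 1e16) ** 2           # obs error (2 TECU; 1 TECU = 1e16 el/m^2)
y = H @ x_b * (1 + rng.normal(0, 0.05, n_obs))  # synthetic TEC observations

# Kalman gain and analysis update: x_a = x_b + K (y - H x_b)
S = H @ B @ H.T + R
K = B @ H.T @ np.linalg.solve(S, np.eye(n_obs))
x_a = x_b + K @ (y - H @ x_b)
P_a = (np.eye(n_grid) - K @ H) @ B              # analysis error covariance
print("mean density change (el/m^3):", np.mean(x_a - x_b))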
NASA Astrophysics Data System (ADS)
Tuia, Devis; Marcos, Diego; Camps-Valls, Gustau
2016-10-01
Remote sensing image classification exploiting multiple sensors is a very challenging problem: data from different modalities are affected by spectral distortions and misalignments of all kinds, and this hampers the reuse of models built for one image in other scenes. In order to adapt and transfer models across image acquisitions, one must be able to cope with datasets that are not co-registered, acquired under different illumination and atmospheric conditions, by different sensors, and with scarce ground references. Traditionally, methods based on histogram matching have been used. However, they fail when densities have very different shapes or when there is no corresponding band to be matched between the images. An alternative builds upon manifold alignment. Manifold alignment performs a multidimensional relative normalization of the data prior to product generation that can cope with data of different dimensionality (e.g., different numbers of bands) and possibly unpaired examples. Aligning data distributions is an appealing strategy, since it provides data spaces that are more similar to each other, regardless of the subsequent use of the transformed data. In this paper, we study a methodology that aligns data from different domains in a nonlinear way through kernelization. We introduce the Kernel Manifold Alignment (KEMA) method, which provides a flexible and discriminative projection map, exploits only a few labeled samples (or semantic ties) in each domain, and reduces to solving a generalized eigenvalue problem. We successfully test KEMA in multi-temporal and multi-source very high resolution classification tasks, as well as on the task of making a model invariant to shadowing for hyperspectral imaging.
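The generalized eigenvalue problem that such kernel-based alignment methods reduce to can be sketched in Python as below. This is a loose, hypothetical construction (an RBF kernel per domain, a cross-domain similarity Laplacian, and a centering matrix as the normalizing term), intended only to show the final eigenproblem step; it is not the exact KEMA formulation.

import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)

# Two hypothetical domains with different dimensionality and assumed pairwise ties.
X1 = rng.normal(size=(40, 6))
X2 = rng.normal(size=(40, 4))

def rbf_kernel(X, gamma=0.5):
    # Radial basis function kernel matrix within one domain.
    return np.exp(-gamma * cdist(X, X) ** 2)

K = np.block([[rbf_kernel(X1), np.zeros((40, 40))],
              [np.zeros((40, 40)), rbf_kernel(X2)]])

# Similarity graph W links the (assumed) corresponding samples across domains;
# L is its graph Laplacian. Ld here is a simple centering matrix used as the
# normalizing term to avoid trivially collapsing all projections together.
W = np.block([[np.zeros((40, 40)), np.eye(40)],
              [np.eye(40), np.zeros((40, 40))]])
L = np.diag(W.sum(axis=1)) - W
Ld = np.eye(80) - np.full((80, 80), 1.0 / 80)

# Alignment reduces to a generalized eigenvalue problem in kernel space:
# keep linked samples close relative to the overall spread, solved with eigh(A, B).
A = K @ L @ K + 1e-6 * np.eye(80)
Bm = K @ Ld @ K + 1e-6 * np.eye(80)
eigvals, eigvecs = eigh(A, Bm)
projection = eigvecs[:, :3]   # first few alignment directions (smallest eigenvalues)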