Sample records for based selection schemes

  1. Genetic progress in multistage dairy cattle breeding schemes using genetic markers.

    PubMed

    Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P

    2005-04-01

    The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. 
Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny-tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
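The deterministic response figures above all derive from the breeder's equation, R = i·r·σ_A, where the selection intensity i follows from the selected proportion. A minimal sketch of that calculation (illustrative only, not the authors' simulation; the accuracy values below are made up):

```python
from statistics import NormalDist

def selection_intensity(p):
    """Selection intensity i = z/p for truncation selection of the top
    fraction p on a standard-normally distributed index."""
    nd = NormalDist()
    x = nd.inv_cdf(1.0 - p)        # truncation point on the index scale
    return nd.pdf(x) / p

def response(p, accuracy, sigma_a=1.0):
    """Breeder's equation: expected genetic response R = i * r * sigma_A."""
    return selection_intensity(p) * accuracy * sigma_a

# QTL information raises the index accuracy r, and response scales with it:
# selecting the top 1% at r = 0.88 vs. r = 0.80 gives 10% more gain.
gain_base = response(0.01, 0.80)
gain_qtl = response(0.01, 0.88)
```

Because R is linear in r, the relative gain from QTL information is simply the ratio of accuracies at a fixed selected proportion.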

  2. Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.

    PubMed

    Kahi, A K; Hirooka, H

    2005-09-01

    Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability, in the context of maximizing returns on investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined. The schemes ranged from one that used carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10). The latter scheme represented the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effect on genetic gain and profitability of varying test capacity and number of progeny per sire and of ultrasound scanning of live animals was examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity resulted in an increase in genetic gain in all schemes. Profitability was optimal in Schemes 2 (similar to Scheme 1, but with selection of young bulls also based on information on carcass traits from their relatives) through 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme with ultrasound scanning of live animals for carcass traits. 
Ultrasound scanning of live animals was more important than the addition of any other traits in the selection criteria. These results may be used to provide guidance to Japanese Black cattle breeders.

  3. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features that most effectively and representatively reflect fault characteristics are selected using the max-relevance and min-redundancy (mRMR) principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in the detection of train axle bearing faults.
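The max-relevance min-redundancy step can be sketched as a greedy loop. The sketch below substitutes absolute Pearson correlation for the mutual-information terms used in practice, so it illustrates the principle rather than the paper's implementation:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def mrmr(features, target, k):
    """Greedy mRMR: repeatedly pick the feature with the best
    relevance-minus-mean-redundancy score against the selected set.
    features: dict name -> value list; target: value list."""
    selected, remaining = [], list(features)
    while len(selected) < k and remaining:
        def score(name):
            rel = abs(pearson(features[name], target))
            if not selected:
                return rel
            red = sum(abs(pearson(features[name], features[s]))
                      for s in selected) / len(selected)
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The first pick is pure relevance; every later pick is penalized by its average similarity to the features already chosen.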

  4. An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks

    PubMed Central

    Salim, Shelly; Moh, Sangman

    2016-01-01

    A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead. PMID:27376290

  5. An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks.

    PubMed

    Salim, Shelly; Moh, Sangman

    2016-06-30

    A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead.
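The paper does not spell out its game-theory-based selection here, but the general idea of spectrum selection as a game can be illustrated with a toy congestion game: each node repeatedly best-responds by moving to the least-loaded channel, converging to a pure-strategy Nash equilibrium with balanced loads (a hypothetical sketch, not the EGSD algorithm):

```python
def best_response_channels(num_nodes, num_channels, rounds=20):
    """Toy spectrum-selection congestion game: each node switches to the
    channel least loaded by the other nodes; iteration stops at a
    pure-strategy Nash equilibrium (no node wants to move)."""
    choice = [0] * num_nodes          # all nodes start on channel 0
    for _ in range(rounds):
        changed = False
        for i in range(num_nodes):
            load = [0] * num_channels
            for j, c in enumerate(choice):
                if j != i:            # load seen by node i excludes itself
                    load[c] += 1
            best = min(range(num_channels), key=lambda c: load[c])
            if load[best] < load[choice[i]]:
                choice[i] = best
                changed = True
        if not changed:               # equilibrium reached
            break
    return choice
```

At equilibrium no channel can exceed another's load by more than one, which is the energy-relevant property: no node burns energy contending on an overcrowded channel.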

  6. Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions

    PubMed Central

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin

    2017-01-01

    The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
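Sequential floating forward selection, the feature-selection algorithm named above, alternates a greedy forward step with a conditional backward step. A generic sketch against an arbitrary subset-scoring function (`score` is a placeholder, not the paper's ANN criterion):

```python
def sffs(candidates, score, k):
    """Sequential floating forward selection (sketch): greedily add the
    feature that most improves score(subset), then conditionally drop any
    earlier feature whose removal improves the score further."""
    selected = []
    while len(selected) < k:
        # forward step: add the best remaining feature
        newest = max((f for f in candidates if f not in selected),
                     key=lambda f: score(selected + [f]))
        selected.append(newest)
        # floating step: try dropping an earlier feature (never the newest)
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in selected[:-1]:
                trimmed = [g for g in selected if g != f]
                if score(trimmed) > score(selected):
                    selected = trimmed
                    improved = True
                    break
    return selected
```

The floating backward step is what distinguishes SFFS from plain greedy forward selection: it can undo an early pick that a later feature makes redundant.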

  7. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; therefore FPRM-PP can be widely used in the optimal selection of different multi-factor decision-making schemes.

  8. Protection of autonomous microgrids using agent-based distributed communication

    DOE PAGES

    Cintuglu, Mehmet H.; Ma, Tan; Mohammed, Osama A.

    2016-04-06

    This study presents a real-time implementation of autonomous microgrid protection using agent-based distributed communication. Protection of an autonomous microgrid requires special considerations compared to large-scale distribution networks due to the presence of power converters and relatively low inertia. In this work, we introduce a practical overcurrent and a frequency selectivity method to overcome conventional limitations. The proposed overcurrent scheme defines a selectivity mechanism considering the remedial action scheme (RAS) of the microgrid after a fault instant based on feeder characteristics and the location of the intelligent electronic devices (IEDs). A synchrophasor-based online frequency selectivity approach is proposed to avoid pulse loading effects in low inertia microgrids. Experimental results are presented for verification of the proposed schemes using a laboratory-based microgrid. The setup was composed of actual generation units and IEDs using the IEC 61850 protocol. The experimental results were in excellent agreement with the proposed protection scheme.

  9. Protection of autonomous microgrids using agent-based distributed communication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cintuglu, Mehmet H.; Ma, Tan; Mohammed, Osama A.

    This study presents a real-time implementation of autonomous microgrid protection using agent-based distributed communication. Protection of an autonomous microgrid requires special considerations compared to large-scale distribution networks due to the presence of power converters and relatively low inertia. In this work, we introduce a practical overcurrent and a frequency selectivity method to overcome conventional limitations. The proposed overcurrent scheme defines a selectivity mechanism considering the remedial action scheme (RAS) of the microgrid after a fault instant based on feeder characteristics and the location of the intelligent electronic devices (IEDs). A synchrophasor-based online frequency selectivity approach is proposed to avoid pulse loading effects in low inertia microgrids. Experimental results are presented for verification of the proposed schemes using a laboratory-based microgrid. The setup was composed of actual generation units and IEDs using the IEC 61850 protocol. The experimental results were in excellent agreement with the proposed protection scheme.

  10. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts' Law test.

    PubMed

    Scheme, Erik J; Englehart, Kevin B

    2013-07-01

    When controlling a powered upper limb prosthesis it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R(2) > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework the selective classification based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
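The Fitts' Law framework used here rests on the Shannon formulation of the index of difficulty, ID = log2(D/W + 1), and throughput reported as the mean of ID/MT across trials. A minimal sketch of those metrics (illustrative; the authors' 3-D protocol involves more than this):

```python
import math

def fitts_id(distance, width):
    """Shannon-formulation index of difficulty, in bits, for a target of
    the given width at the given movement distance."""
    return math.log2(distance / width + 1.0)

def throughput(trials):
    """Mean ID/MT over (distance, width, movement_time) trials, in bits/s."""
    return sum(fitts_id(d, w) / t for d, w, t in trials) / len(trials)
```

A linear regression of movement time against ID (MT = a + b·ID) with a high coefficient of determination is what "obeying Fitts' Law" means in the abstract above.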

  11. Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication

    NASA Astrophysics Data System (ADS)

    Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun

    2018-07-01

    A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white gaussian noise (AWGN), frequency selective Rayleigh, Rician and Nakagami-m fading channels has been evaluated. The considered Plotkin's construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme in a coded cooperative scenario. As the relay's effective collaboration has always been pivotal in the design of cooperative communication therefore, an efficient selection criterion for choosing the information bits has been inculcated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded conventional polar-coded cooperative scheme in the context of OFDM as an appropriate bench marker. The Monte Carlo simulated results revealed that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5 0.6 dBs over AWGN channel. This prominent gain in BER performance is made possible due to the bit-selection criteria and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under an identical condition.

  12. A Selective Group Authentication Scheme for IoT-Based Medical Information System.

    PubMed

    Park, YoHan; Park, YoungHo

    2017-04-01

    The technology of IoT combined with medical systems is expected to support advanced medical services. However, unsolved security problems, such as misuse of medical devices and illegal access to the medical server, have kept IoT-based medical systems from being widely applied. In addition, given the explosive growth of IoT devices, users bear a high computational burden when accessing Things. Because medical information is critical and important, yet users have restricted computing power, IoT-based medical systems are required to provide secure and efficient authentication for users. In this paper, we propose a selective group authentication scheme using Shamir's threshold technique. The property of selectivity gives users the right to choose the group of Things they wish to access, and users can obtain access authority for all of those Things at one time. Thus, our scheme provides efficient user authentication for multiple Things and conditional access authority for a safe IoT-based medical information system. To the best of our knowledge, our proposed scheme is the first to combine selectivity with group authentication in IoT environments.
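Shamir's threshold technique, on which the scheme above is built, splits a secret into n shares such that any k of them reconstruct it by Lagrange interpolation over a prime field, while fewer than k reveal nothing. A self-contained sketch of the primitive (the prime and parameters are arbitrary choices for illustration, not the paper's):

```python
import random

PRIME = 2_147_483_647  # Mersenne prime 2^31 - 1, the working field

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: evaluate a random degree-(k-1) polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # modular inverse via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any k shares suffice, which is what lets a user authenticate to a self-selected group of Things "at one time" rather than to each Thing individually.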

  13. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. 
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
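The two-stage selection itself reduces to ranking all atlases with a cheap relevance metric, then re-ranking only the survivors with the expensive full-registration metric. Schematically (the scoring functions are placeholders for the preliminary and refined relevance metrics):

```python
def two_stage_select(atlases, cheap_score, full_score, n_augment, n_fuse):
    """Stage 1: rank every atlas with a low-cost relevance metric and keep
    the top n_augment survivors. Stage 2: re-rank only those survivors
    with the expensive full-registration metric and keep the top n_fuse
    for label fusion. The expensive metric is evaluated on n_augment
    atlases instead of the whole collection."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:n_augment]
    return sorted(stage1, key=full_score, reverse=True)[:n_fuse]
```

The paper's inference model addresses the one risk of this layout: choosing n_augment large enough that the truly best atlases survive stage 1 with high probability.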

  14. Activity Detection and Retrieval for Image and Video Data with Limited Training

    DTIC Science & Technology

    2015-06-10

    ... applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity ...

  15. PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs.

    PubMed

    Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun

    2015-12-09

    In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes' ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes.

  16. PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs

    PubMed Central

    Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun

    2015-01-01

    In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes’ ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes. PMID:26690178
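The core detection idea, that per-hop ACKs returned over disjoint routes localize where packets start disappearing, can be caricatured in a few lines (a toy illustration of the principle, not the PHACK protocol):

```python
def locate_suspect(path, acked):
    """Per-hop ACK check (toy version): every node on the forwarding path
    should return an ACK via a separate route. The first node on the path
    whose ACK is missing marks where packets start being dropped, so it
    (or its upstream link) is the suspect for selective forwarding."""
    for node in path:
        if node not in acked:
            return node
    return None  # all ACKs received: no selective forwarding suspected
```

Because each ACK travels back on a different route, an attacker on the forwarding path cannot also suppress the evidence against itself, which is the resilience property the abstract emphasizes.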

  17. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C from the red, green, and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment count, operation speed, PSNR, and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.

  18. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C from the red, green, and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment count, operation speed, PSNR, and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR. PMID:28257451
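The real compression scheme described in both records can be sketched with NumPy: stack the three channels into one real matrix, truncate its SVD at rank k, and rebuild. This is a sketch of the stated idea (assuming NumPy is available; the channel layout and the compression-ratio accounting are plausible choices, not details given in the abstract):

```python
import numpy as np

def svd_compress_rgb(img, k):
    """Stack the R, G, B channels of an (h, w, 3) array side by side into
    one real h x 3w matrix, take a rank-k truncated SVD, and rebuild."""
    h, w, _ = img.shape
    C = np.hstack([img[:, :, c] for c in range(3)])
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]          # rank-k approximation
    return np.stack([Ck[:, i * w:(i + 1) * w] for i in range(3)], axis=2)

def compression_ratio(h, w, k):
    """Original h*3w values vs. the k*(h + 3w + 1) values actually stored
    (k left vectors, k right vectors, k singular values)."""
    return (h * 3 * w) / (k * (h + 3 * w + 1))
```

One real SVD of the stacked matrix replaces the quaternion SVD, which is where the operation-time advantage reported above comes from.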

  19. TripSense: A Trust-Based Vehicular Platoon Crowdsensing Scheme with Privacy Preservation in VANETs

    PubMed Central

    Hu, Hao; Lu, Rongxing; Huang, Cheng; Zhang, Zonghua

    2016-01-01

    In this paper, we propose a trust-based vehicular platoon crowdsensing scheme, named TripSense, in VANETs. The proposed TripSense scheme introduces a trust-based system to evaluate vehicles' sensing abilities and then selects the more capable vehicles in order to improve the accuracy of sensing results. In addition, the sensing tasks are accomplished by platoon member (PM) vehicles and preprocessed by platoon head vehicles before the data are uploaded to the server. This is less time-consuming and more efficient than having the data submitted by individual platoon member vehicles, and therefore better suited to ephemeral networks like VANETs. Moreover, our proposed TripSense scheme integrates unlinkable pseudo-ID techniques to achieve PM vehicle identity privacy, and employs a privacy-preserving sensing vehicle selection scheme that does not involve the PM vehicle's trust score, preserving its location privacy. Detailed security analysis shows that our proposed TripSense scheme not only achieves the desirable privacy requirements but also resists attacks launched by adversaries. In addition, extensive simulations are conducted to show the correctness and effectiveness of our proposed scheme. PMID:27258287

  20. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  1. Design and evaluation of nonverbal sound-based input for those with motor handicapped.

    PubMed

    Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong

    2013-03-01

    Most personal computing interfaces rely on the users' ability to use hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and to respond promptly compared with other sound-based schemes. Being able to select from multiple item-selecting modes nearly halved the average time needed to complete the test-scenario tasks relative to performing them solely through cursor movements. Still, helping users select the most appropriate modes for desired tasks should further improve the overall usability of the proposed scheme.

  2. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD-affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine; and one nonrigid: third-order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD-affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracy in terms of average distance errors of 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in the case of artificially generated follow-up data. 
Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985–2.156 mm and 1.966–2.234 mm, for NLP and ILD affected regions, respectively, excluding schemes with statistically significant lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: Selected registration schemes in case of ILD CT follow-up analysis indicate the significance of adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.« less
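
    The two evaluation stages above lend themselves to a compact sketch: a deformation field is discarded when the determinant of its Jacobian becomes non-positive anywhere (a fold or singularity), and surviving schemes are ranked by mean landmark distance. The forward-difference Jacobian and toy fields below are illustrative assumptions, not the study's implementation:

```python
import math

def jacobian_determinants_2d(dx, dy, h=1.0):
    """Approximate det(J) of a 2-D deformation (identity + displacement)
    on a regular grid using forward differences. dx, dy hold the
    displacement components as nested lists [row][col]."""
    rows, cols = len(dx), len(dx[0])
    dets = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # partial derivatives of the mapped coordinates
            dxx = 1.0 + (dx[i][j + 1] - dx[i][j]) / h
            dxy = (dx[i + 1][j] - dx[i][j]) / h
            dyx = (dy[i][j + 1] - dy[i][j]) / h
            dyy = 1.0 + (dy[i + 1][j] - dy[i][j]) / h
            dets.append(dxx * dyy - dxy * dyx)
    return dets

def has_singularity(dx, dy):
    """First-stage screen: discard a scheme if det(J) <= 0 anywhere."""
    return any(d <= 0.0 for d in jacobian_determinants_2d(dx, dy))

def mean_landmark_error(pts_a, pts_b):
    """Second-stage metric: average Euclidean distance between
    corresponding landmark pairs."""
    return sum(math.dist(a, b) for a, b in zip(pts_a, pts_b)) / len(pts_a)

# a smooth (invertible) field vs. one that folds space
zero = [[0.0] * 3 for _ in range(3)]
fold = [[0.0, -2.0, 0.0]] * 3          # x-displacement reverses ordering
print(has_singularity(zero, zero))      # False
print(has_singularity(fold, zero))      # True
```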

  3. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also challenges of heterogeneous atlas quality and computational burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
    Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
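
    A minimal sketch of the two-stage selection logic, assuming generic "cheap" and "costly" relevance scores (the actual metrics and registration steps are far more involved):

```python
import random

def two_stage_select(atlases, cheap_score, costly_score, m, k):
    """Hypothetical sketch of two-stage subset selection: a low-cost
    relevance metric trims the collection to an augmented subset of size m,
    and only those m atlases undergo full-fledged (costly) scoring, from
    which the final fusion set of size k is taken (k <= m)."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:m]
    return sorted(stage1, key=costly_score, reverse=True)[:k]

# toy demo: a noisy cheap metric correlated with the true (costly) one
random.seed(0)
true_rel = {a: random.random() for a in range(100)}
noisy = {a: true_rel[a] + random.gauss(0, 0.05) for a in true_rel}
fusion = two_stage_select(list(true_rel), noisy.get, true_rel.get, m=20, k=5)
print(len(fusion))  # 5 atlases chosen, with only 20 of 100 costly scorings
```

    With m chosen via the paper's inference model, only m of the N atlases ever undergo full-fledged registration, which is where the computational saving comes from.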

  4. Assessment strategies for municipal selective waste collection schemes.

    PubMed

    Ferreira, Fátima; Avelino, Catarina; Bentes, Isabel; Matos, Cristina; Teixeira, Carlos Afonso

    2017-01-01

    An important strategy for promoting strong sustainable growth relies on efficient municipal waste management, and phasing out landfilling through waste prevention and recycling emerges as a major target. For this purpose, effective collection schemes are required, in particular selective waste collection schemes, which pursue more efficient, higher-quality recycling of reusable materials. This paper addresses the assessment and benchmarking of selective collection schemes to guide future operational improvements. In particular, the assessment is based on the monitoring and statistical analysis of a core set of performance indicators that highlight collection trends, complemented by a performance index computed as a weighted linear combination of these indicators. This combined analysis offers a potential tool to support decision makers in selecting the collection scheme with the best overall performance. The presented approach was applied to a case study conducted in Oporto Municipality, with data gathered from two distinct selective collection schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
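
    The performance index described above, a weighted linear combination of indicators, can be sketched as follows; the indicator names and weights are invented for illustration and are not those of the study:

```python
def performance_index(indicators, weights):
    """Weighted linear combination of normalized performance indicators.
    Both dicts are keyed by indicator name; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * indicators[name] for name in weights)

# illustrative indicators for two hypothetical collection schemes
scheme_a = {"capture_rate": 0.62, "purity": 0.88, "cost_efficiency": 0.55}
scheme_b = {"capture_rate": 0.71, "purity": 0.80, "cost_efficiency": 0.60}
w = {"capture_rate": 0.4, "purity": 0.4, "cost_efficiency": 0.2}
print(performance_index(scheme_a, w))  # about 0.71
print(performance_index(scheme_b, w))  # about 0.724 -> scheme B ranks higher
```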

  5. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they offer only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice, and a large-cluster flappable-pixel decision (LCFPD) method is used to search for the best location for concealing a secret bit in the selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function for selecting appropriate concealment locations; (b) SBLCA is a blind watermarking scheme that does not need the original image in the watermark extraction process; (c) SBLCA uses slice-based shuffling to transfer the regular image into a hashed state without remembering the state before shuffling; and finally, (d) SBLCA has enough embeddable space that every 64 pixels of the binary image can accommodate a secret bit. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  6. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The vibration signals extracted are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE), and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work shows the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD), and Neighbouring Coefficient (NeighCoeff, NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above four denoising schemes. The fault identification capability, as well as SNR, kurtosis, and RMSE, were compared for the four denoising schemes. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes were evaluated based on the performance of the ANN models, and the best denoising scheme was identified from the classification accuracy results: PCA proved the best denoising scheme in all these regards.
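
    The SNR and RMSE criteria used to rank the denoising schemes can be computed with the standard library alone; the constant-offset "noise" below is purely illustrative:

```python
import math

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB between a reference signal and a
    denoised estimate (higher is better)."""
    sig = sum(x * x for x in clean)
    err = sum((x - y) ** 2 for x, y in zip(clean, denoised))
    return 10.0 * math.log10(sig / err)

def rmse(clean, denoised):
    """Root mean square error (lower is better)."""
    n = len(clean)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(clean, denoised)) / n)

clean = [math.sin(0.1 * i) for i in range(200)]
noisy = [x + 0.1 for x in clean]          # constant offset as stand-in noise
print(round(rmse(clean, noisy), 3))       # 0.1
print(snr_db(clean, noisy) > 0)           # True
```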

  7. Applying a deep learning based CAD scheme to segment and quantify visceral and subcutaneous fat areas from CT images

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin

    2017-03-01

    Abdominal obesity is strongly associated with a number of diseases, and accurate assessment of the subtypes of adipose tissue volume plays a significant role in predicting disease risk, diagnosis, and prognosis. The objective of this study is to develop and evaluate a new computer-aided detection (CAD) scheme based on deep learning models to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on CT images. A dataset of CT images from 40 patients was retrospectively collected and equally divided into two independent groups (i.e., training and testing groups). The new CAD scheme consists of two sequential convolutional neural networks (CNNs), namely a Selection-CNN and a Segmentation-CNN. The Selection-CNN was trained using 2,240 CT slices to automatically select CT slices belonging to the abdomen, and the Segmentation-CNN was trained using 84,000 fat-pixel patches to classify fat pixels as belonging to SFA or VFA. Data from the testing group were then used to evaluate the performance of the optimized CAD scheme. Compared with manually labelled results, the classification accuracy of CT slice selection by the Selection-CNN was 95.8%, while the accuracy of fat pixel segmentation by the Segmentation-CNN was 96.8%. This study therefore demonstrates the feasibility of using a deep learning based CAD scheme to recognize the human abdominal section in CT scans and to segment SFA and VFA from CT slices with high agreement with subjective segmentation results.

  8. Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes

    NASA Astrophysics Data System (ADS)

    Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.

    1980-08-01

    A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.

  9. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.

    PubMed

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-03-20

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods.
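
    The CH selection step, cast as an optimization over remaining energy and distance-to-target, might be sketched as below; the weighting factor alpha and the max-normalization are assumptions, since the abstract does not give the exact objective:

```python
import math

def select_cluster_head(nodes, target, alpha=0.5):
    """Hypothetical sketch of CH selection: score each candidate by a
    weighted trade-off between remaining energy (maximize) and distance
    to target (minimize), both normalized by their maxima."""
    e_max = max(n["energy"] for n in nodes)
    d_max = max(math.dist(n["pos"], target) for n in nodes)

    def score(n):
        return (alpha * n["energy"] / e_max
                - (1 - alpha) * math.dist(n["pos"], target) / d_max)

    return max(nodes, key=score)

nodes = [
    {"id": 1, "energy": 0.9, "pos": (10.0, 0.0)},  # energetic but far away
    {"id": 2, "energy": 0.5, "pos": (1.0, 1.0)},   # close but depleted
    {"id": 3, "energy": 0.8, "pos": (2.0, 0.0)},   # good on both counts
]
print(select_cluster_head(nodes, target=(0.0, 0.0))["id"])  # 3
```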

  10. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks

    PubMed Central

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-01-01

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods. PMID:28335537

  11. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operations and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k

  12. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection that trims the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, from which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to one third (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy.
    The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.

  13. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

    Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of a watermarking algorithm to enhance security. Therefore, how to select an image scrambling scheme, and what kind of image scrambling scheme may be used for watermarking, are key problems. An evaluation method for image scrambling schemes can serve as a useful test tool for revealing the properties or flaws of an image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-plane is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that, for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. Hence, instead of taking all 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This 50% reduction in computational cost makes our scheme efficient.
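
    The bit-plane selection underlying the evaluation can be sketched as follows, keeping only the most significant planes of an 8-bit gray-scale image:

```python
def bit_planes(image, planes=4):
    """Extract the most significant bit-planes of an 8-bit gray-scale
    image given as nested lists. Plane 7 is the MSB; per the result above,
    the first 4 significant planes carry nearly all the information needed
    to judge the scrambling degree."""
    out = []
    for p in range(7, 7 - planes, -1):
        out.append([[(px >> p) & 1 for px in row] for row in image])
    return out

img = [[0, 128, 255], [64, 200, 17]]
msb = bit_planes(img, planes=1)[0]      # plane 7 only
print(msb)  # [[0, 1, 1], [0, 1, 0]]
```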

  14. THE WESTERN LAKE SUPERIOR COMPARATIVE WATERSHED FRAMEWORK: A FIELD TEST OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED GEOGRAPHICALLY-INDEPENDENT CLASSIFICATION

    EPA Science Inventory

    Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...

  15. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    PubMed

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature, many methods have been proposed to estimate missing values using the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other.
    Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity, while neighbour-based methods (KNN, OLS, LSA, LLS) performed better on data with higher complexity. We also found that the EBS and STS schemes serve as complementary and effective tools for selecting the optimal imputation algorithm.
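
    The entropy-based notion of complexity can be sketched as the normalized Shannon entropy of the matrix's squared singular value spectrum; the exact normalization used in the paper is an assumption here, and the singular values are taken as given:

```python
import math

def complexity_entropy(singular_values):
    """Sketch of an entropy-based complexity measure: normalize the
    squared singular values into a distribution and take its Shannon
    entropy, scaled to [0, 1]. A matrix well captured by a low-dimensional
    subspace (few dominant components) gets a value near 0."""
    p = [s * s for s in singular_values]
    total = sum(p)
    p = [x / total for x in p]
    h = -sum(x * math.log(x) for x in p if x > 0)
    return h / math.log(len(p))        # divide by max entropy log(n)

print(complexity_entropy([10.0, 0.1, 0.1]))   # near 0: simple structure
print(complexity_entropy([1.0, 1.0, 1.0]))    # 1.0: maximal complexity
```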

  16. The GIS map coloring support decision-making system based on case-based reasoning and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Deng, Shuang; Xiang, Wenting; Tian, Yangge

    2009-10-01

    Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually need to be colored according to customer requirements, which makes the work more complex. With the development of GIS, more and more programmers who lack cartographic training join project teams, and their colored maps often fail to meet customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color-scheme decision-making system that selects color schemes of similar customers from a case base, for the customer to choose from and adjust. The system is a mixed B/S and C/S system; the client side uses JSP, which allows system developers to remotely access the color-scheme cases in the database server and communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to provide a single "best" option. We therefore use the Simulated Annealing Algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust the colors of certain features based on an existing case. The results show that the system facilitates communication between designers and customers and improves the quality and efficiency of map coloring.

  17. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify breast tissue as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detail coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is done by selecting the relevant features that affect the classification; it reduces the dimensionality of the data by discarding irrelevant features. In this paper, the F-test and T-test are applied to the feature extraction dataset to reduce and select the relevant features. The best features are used in a Neural Network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better with respect to accuracy, specificity, and sensitivity. Based on the experiments, the proposed scheme can produce a high accuracy of 92.71%, while the lowest accuracy obtained is 77.08%.
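
    The F-test feature-ranking step can be sketched for the two-class case (normal vs. abnormal), where the one-way ANOVA F statistic is the square of the pooled t statistic; the toy feature values are invented:

```python
import statistics

def f_statistic(group_a, group_b):
    """One-way ANOVA F statistic for two classes; large values mean the
    feature separates the classes well."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    grand = (na * ma + nb * mb) / (na + nb)
    between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2   # df = 1
    within = (sum((x - ma) ** 2 for x in group_a)
              + sum((x - mb) ** 2 for x in group_b)) / (na + nb - 2)
    return between / within

def select_features(features, k):
    """Keep the k feature indices with the largest F statistic.
    `features` maps index -> (normal_values, abnormal_values)."""
    ranked = sorted(features, key=lambda i: f_statistic(*features[i]),
                    reverse=True)
    return ranked[:k]

feats = {
    0: ([1.0, 1.1, 0.9], [3.0, 3.2, 2.9]),   # strongly separating
    1: ([1.0, 2.0, 3.0], [1.1, 2.1, 2.9]),   # barely separating
}
print(select_features(feats, k=1))  # [0]
```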

  18. Comparative research of redundant strap down inertial navigation system based on different configuration schemes

    NASA Astrophysics Data System (ADS)

    Yu, Yuting; Cheng, Ming

    2018-05-01

    Aiming at the various configuration schemes and inertial measurement units of strapdown inertial navigation systems, a tetrahedron skew configuration and a coaxial orthogonal configuration, each built from nine low-cost IMUs, were selected. The performance index, reliability, and fault-diagnosis ability of the navigation system were calculated and simulated. The analysis shows that the reliability and reconfiguration capability of the skew configuration are superior to those of the orthogonal configuration, while the performance index and fault-diagnosis ability of the two systems are similar. This work provides a useful reference for configuration selection in engineering applications.

  19. Modeling and performance analysis of an improved movement-based location management scheme for packet-switched mobile communication systems.

    PubMed

    Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon

    2014-01-01

    One of the key technologies supporting the mobility of mobile stations (MSs) in mobile communication systems is location management, which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering the bursty traffic characteristics of packet-switched (PS) services. Analytical models for the location update and paging signaling loads of the proposed scheme are developed thoroughly, and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
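
    A movement-based scheme with two thresholds might work as in this sketch: the MS counts cell-boundary crossings and performs a location update when the count reaches a threshold that depends on whether a PS session is active. The threshold names and reset rule are illustrative assumptions, not the paper's model:

```python
def location_updates(movements, active_flags, d_active, d_idle):
    """Count location updates for a movement-based scheme with two
    movement thresholds. `movements[i]` is True when the MS crosses a
    cell boundary at step i; `active_flags[i]` is True during a PS
    session (bursty traffic), selecting the active threshold."""
    count, updates = 0, 0
    for moved, active in zip(movements, active_flags):
        if moved:
            count += 1
        threshold = d_active if active else d_idle
        if count >= threshold:
            updates += 1
            count = 0          # movement counter resets after each update
    return updates

moves = [True] * 10
print(location_updates(moves, [True] * 10, d_active=2, d_idle=5))   # 5
print(location_updates(moves, [False] * 10, d_active=2, d_idle=5))  # 2
```

    A smaller threshold during active sessions trades more update signaling for cheaper paging, which is the balance the analytical model optimizes.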

  20. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the points to be protected from the user's trajectory data; secondly, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroid; finally, noise is added to the polygon centroid by the differential privacy method, the noisy centroid replaces the protected point, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithms run quickly, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
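
    The core perturbation step, replacing a protected point by its polygon centroid plus Laplace noise (the ε-differential-privacy Laplace mechanism), can be sketched as follows; the vertex-average centroid and the sensitivity value are simplifying assumptions:

```python
import math
import random

def polygon_centroid(points):
    """Plain vertex average as a stand-in centroid of the polygon formed
    by a protected point and its neighbouring frequent points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def laplace(scale):
    """Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_point(protected, neighbours, epsilon, sensitivity=1.0):
    """Replace a protected trajectory point with the polygon centroid
    plus Laplace noise of scale sensitivity/epsilon (Laplace mechanism)."""
    cx, cy = polygon_centroid([protected] + neighbours)
    b = sensitivity / epsilon
    return (cx + laplace(b), cy + laplace(b))

random.seed(42)
new_pt = perturb_point((5.0, 5.0), [(4.0, 6.0), (6.0, 6.0), (5.0, 4.0)],
                       epsilon=1.0)
print(new_pt)  # a noisy replacement for the protected point
```

    Smaller ε means a larger noise scale and stronger privacy at the cost of trajectory usability.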

  1. Error reduction program: A progress report

    NASA Technical Reports Server (NTRS)

    Syed, S. A.

    1984-01-01

    Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error-reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
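
    Upstream (upwind) differencing, the family to which both QUDS and BSUDS2 belong, is easiest to see in its first-order form for 1-D advection; QUDS and BSUDS2 add higher-order and skewed stencils on top of this basic idea:

```python
def upwind_step(u, c, dx, dt):
    """One explicit time step of first-order upstream differencing for
    the advection equation u_t + c u_x = 0, with a periodic boundary.
    The spatial difference is taken on the upstream side of each cell."""
    r = c * dt / dx            # Courant number; need |r| <= 1 for stability
    n = len(u)
    if c >= 0:
        return [u[i] - r * (u[i] - u[i - 1]) for i in range(n)]
    return [u[i] - r * (u[(i + 1) % n] - u[i]) for i in range(n)]

u = [0.0, 1.0, 0.0, 0.0]
print(upwind_step(u, c=1.0, dx=1.0, dt=1.0))  # [0.0, 0.0, 1.0, 0.0]
```

    At Courant number 1 the pulse advects exactly one cell per step; at smaller Courant numbers the scheme's numerical diffusion, the quantity the study sought to minimize, smears it out.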

  2. A Cu-Zn nanoparticle promoter for selective carbon dioxide reduction and its application in visible-light-active Z-scheme systems using water as an electron donor.

    PubMed

    Yin, Ge; Sako, Hiroshi; Gubbala, Ramesh V; Ueda, Shigenori; Yamaguchi, Akira; Abe, Hideki; Miyauchi, Masahiro

    2018-04-17

    Selective carbon dioxide photoreduction to produce formic acid was achieved under visible light irradiation using water molecules as electron donors, similar to natural plants, based on the construction of a Z-scheme light harvesting system modified with a Cu-Zn alloy nanoparticle co-catalyst. The faradaic efficiency of our Z-scheme system for HCOOH generation was over 50% under visible light irradiation.

  3. Drinking Water Quality Criterion - Based site Selection of Aquifer Storage and Recovery Scheme in Chou-Shui River Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Huang, H. E.; Liang, C. P.; Jang, C. S.; Chen, J. S.

    2015-12-01

    Land subsidence due to groundwater exploitation is an urgent environmental problem in the Choushui river alluvial fan in Taiwan. Aquifer storage and recovery (ASR), where excess surface water is injected into subsurface aquifers for later recovery, is one promising strategy for managing surplus water and may overcome water shortages. The performance of an ASR scheme is generally evaluated in terms of recovery efficiency, defined as the percentage of water injected into a system at an ASR site that fulfills the targeted water quality criterion. Site selection for an ASR scheme typically faces great challenges due to the spatial variability of groundwater quality and hydrogeological conditions. This study proposes a novel method for ASR site selection based on a drinking water quality criterion. Simplified groundwater flow and contaminant transport models are used to map the spatial distribution of recovery efficiency, drawing on groundwater quality, hydrogeological conditions, and ASR operation. The results of this study may provide government administrators with a basis for establishing a reliable ASR scheme.
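
    The recovery-efficiency criterion, the percentage of injected water that still meets the drinking-water quality target on recovery, is simple to state in code; the per-parcel discretization is an illustrative assumption:

```python
def recovery_efficiency(injected_volumes, meets_criterion):
    """Recovery efficiency as defined above: the percentage of injected
    water volume that, on recovery, meets the targeted water quality
    criterion. Inputs are per-parcel volumes and boolean quality flags."""
    total = sum(injected_volumes)
    good = sum(v for v, ok in zip(injected_volumes, meets_criterion) if ok)
    return 100.0 * good / total

print(recovery_efficiency([10.0, 20.0, 30.0], [True, True, False]))  # 50.0
```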

  4. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    PubMed

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

    Attribute-based encryption (ABE) is a good way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has important applications in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation for cloud storage. The keyword search in our scheme is attribute-based with access control: when a search succeeds, the cloud server returns the corresponding ciphertext to the user, who can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  5. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on quickly finding the files a user is interested in within cloud storage. Designing a searchable attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server. Our scheme supports multiple attributes being revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  7. Implementation analysis of RC5 algorithm on Preneel-Govaerts-Vandewalle (PGV) hashing schemes using length extension attack

    NASA Astrophysics Data System (ADS)

    Siswantyo, Sepha; Susanti, Bety Hayat

    2016-02-01

    Preneel-Govaerts-Vandewalle (PGV) schemes comprise 64 possible single-block-length constructions for building a hash function from a block cipher. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack to those 4 secure PGV schemes, instantiated with the RC5 algorithm, to test their collision resistance. The attack results show that collisions occur in all 4 schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotations of the RC5 algorithm, the XOR operations in the schemes, and the choice of the additional message-block value all contribute to the occurrence of collisions.
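    As a rough illustration of how a single-block-length PGV construction chains a block cipher, here is a toy Matyas-Meyer-Oseas-style compression function in Python. A hash-based stand-in replaces RC5, and padding/finalisation is omitted, which is exactly the gap a length extension attack exploits:

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # A 16-byte hash-based stand-in for RC5; illustration only, not secure.
    return hashlib.sha256(key + block).digest()[:16]

def mmo_compress(h: bytes, m: bytes) -> bytes:
    # Matyas-Meyer-Oseas, one of the "secure" PGV forms:
    # H_i = E_{H_{i-1}}(m_i) XOR m_i
    return bytes(a ^ b for a, b in zip(toy_block_cipher(h, m), m))

def toy_hash(blocks, iv=b"\x00" * 16):
    h = iv
    for m in blocks:
        h = mmo_compress(h, m)
    return h

m1, m2 = b"A" * 16, b"B" * 16
# Length extension: from H([m1]) alone, an attacker extends the digest
# to H([m1, m2]) without ever knowing m1.
assert mmo_compress(toy_hash([m1]), m2) == toy_hash([m1, m2])
```

    The chaining value is the entire hash state, so anyone holding a digest can keep compressing further blocks onto it.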

  8. Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield

    Treesearch

    Robert B. Thomas

    1986-01-01

    SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which cannot estimate variance. Sampling probabilities are based on a sediment rating function, which promotes greater sampling intensity during periods of high...
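    A minimal sketch of the variable-probability idea behind SALT: sample events with probability proportional to a rating function and weight each observation by 1/p (a Hansen-Hurwitz-style estimator), which keeps the estimate of the total unbiased. The yield values are invented for illustration:

```python
import random

def pps_estimate(yields, probs, n_draws, rng):
    # Draw with replacement; each draw contributes yields[i] / probs[i],
    # itself an unbiased estimate of the population total.
    est = 0.0
    for _ in range(n_draws):
        i = rng.choices(range(len(yields)), weights=probs)[0]
        est += yields[i] / probs[i]
    return est / n_draws

yields = [1.0, 2.0, 3.0, 4.0, 5.0]          # per-event sediment yields
probs = [y / sum(yields) for y in yields]   # a "perfect" rating function
est = pps_estimate(yields, probs, 200, random.Random(1))
# With probabilities exactly proportional to yield, every draw returns
# the true total, so the estimator has zero variance here.
assert abs(est - sum(yields)) < 1e-9
```

    A real rating function only approximates the yields, so the variance is nonzero but still estimable, which is SALT's advantage over fixed-interval sampling.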

  9. A Selective Encryption Algorithm Based on AES for Medical Information.

    PubMed

    Oh, Ju-Young; Yang, Dong-Il; Chon, Ki-Hwan

    2010-03-01

    The transmission of medical information is currently a daily routine. Medical information needs efficient, robust and secure encryption modes, but cryptography is primarily a computationally intensive process. Toward this end, we design a selective encryption scheme for critical data transmission. We extend the Advanced Encryption Standard (AES)-Rijndael with five criteria: the first is the compression of plain data, the second is the variable size of the block, the third is the selectable round, the fourth is the optimization of software implementation, and the fifth is the selective function of the whole routine. We implemented our selective encryption scheme in C++ and compiled it with Code::Blocks using a MinGW GCC compiler. The experimental results showed that our selective encryption scheme achieves a faster execution speed of encryption/decryption. In future work, we intend to use resource optimization to enhance the round operations, such as SubByte/InvSubByte, by exploiting similarities between encryption and decryption. As encryption schemes become more widely used, hardware/software co-design is also a growing area of interest.
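    A minimal sketch of the selective-encryption idea: only the byte ranges marked critical are enciphered, so the bulk of the record costs nothing. The SHA-256 counter keystream below is a stand-in for AES-Rijndael, and the record layout is invented, so this illustrates the selection logic rather than the authors' cipher:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Counter-mode keystream built from a hash (stand-in for a block cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def selective_xor(data: bytes, key: bytes, critical_ranges) -> bytes:
    # XOR-encrypt only the critical ranges; applying it again decrypts.
    buf = bytearray(data)
    for start, end in critical_ranges:
        ks = keystream(key + start.to_bytes(4, "big"), end - start)
        for i, k in zip(range(start, end), ks):
            buf[i] ^= k
    return bytes(buf)

record = b"name:JohnDoe;diagnosis:CONFIDENTIAL;room:12"
ranges = [(5, 12), (22, 34)]
enc = selective_xor(record, b"k0", ranges)
assert enc != record
assert enc[:5] == record[:5]                       # non-critical prefix untouched
assert selective_xor(enc, b"k0", ranges) == record  # round trip
```

    The trade-off is the usual one for selective encryption: speed is gained by leaving non-critical bytes in the clear, so the choice of ranges carries the security burden.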

  11. Multi-Hierarchical Gray Correlation Analysis Applied in the Selection of Green Building Design Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Chuanghong

    2018-02-01

    As a sustainable form of ecological construction, green building is increasingly advocated and receiving widespread attention in society. In the survey and design phase of a construction project, evaluating and selecting the green building design scheme against a scientific and reasonable evaluation index system can largely and effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed, taking into account the evaluation contents related to the design scheme. We asked experts experienced in construction scheme optimization to score the indexes and determined the weight of each evaluation index through the AHP method. The correlation degree between each candidate scheme and the ideal scheme was calculated using a multilevel grey relational analysis model, and the optimal scheme was thereby determined. The feasibility and practicability of the evaluation method are verified with examples.
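    The grey relational step described above can be sketched in a few lines: each scheme's (normalized) index values are compared against the ideal scheme, and the weighted grey relational grade ranks the schemes. The scores and weights below are invented, and ρ = 0.5 is the customary distinguishing coefficient:

```python
def grey_relational_grades(schemes, ideal, weights, rho=0.5):
    # Absolute differences between each scheme and the ideal scheme.
    deltas = [[abs(s - i) for s, i in zip(row, ideal)] for row in schemes]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        # Grey relational coefficient per index, then weighted average.
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades

# Two candidate design schemes scored on (cost, energy) after normalization.
grades = grey_relational_grades(
    schemes=[[0.9, 0.8], [0.6, 0.7]], ideal=[1.0, 1.0], weights=[0.5, 0.5]
)
assert grades[0] > grades[1]   # the scheme closer to the ideal ranks first
```

    With AHP-derived weights in place of the equal weights used here, the highest grade identifies the optimal scheme.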

  12. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality multichannel medical-signal compression for personal medical products is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression of multichannel signals and is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimal joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel bio-signal lossless data compressor. PMID:25237900
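    The joint-coding decision hinges on the cross-correlation of channel residuals; a plain-Python sketch of such a rule follows. The 0.8 threshold is an invented stand-in for the paper's empirically derived correlation/compression-ratio relationship:

```python
import math

def ncc(a, b):
    # Normalized cross-correlation at zero lag.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def joint_coding_decision(res_a, res_b, threshold=0.8):
    # Joint-code a channel pair only when its residuals are strongly
    # correlated; otherwise code the channels independently.
    return abs(ncc(res_a, res_b)) >= threshold

assert joint_coding_decision([1, 2, 3, 4], [2, 4, 6, 8])          # correlated
assert not joint_coding_decision([1, 2, 3, 4], [1, -1, 1, -1])    # uncorrelated
```

    Gating the expensive joint coder on a cheap correlation test is what keeps the hardware encoder's complexity low.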

  13. Analysis and design of digital output interface devices for gas turbine electronic controls

    NASA Technical Reports Server (NTRS)

    Newirth, D. M.; Koenig, E. W.

    1976-01-01

    A trade study was performed on twenty-one digital output interface schemes for gas turbine electronic controls to select the most promising scheme based on criteria of reliability, performance, cost, and sampling requirements. The most promising scheme, a digital effector with optical feedback of the fuel metering valve position, was designed.

  14. Selective classification for improved robustness of myoelectric control under nonideal conditions.

    PubMed

    Scheme, Erik J; Englehart, Kevin B; Hudgins, Bernard S

    2011-06-01

    Recent literature in pattern recognition-based myoelectric control has highlighted a disparity between classification accuracy and the usability of upper limb prostheses. This paper suggests that the conventionally defined classification accuracy may be idealistic and may not reflect true clinical performance. Herein, a novel myoelectric control system based on a selective multiclass one-versus-one classification scheme, capable of rejecting unknown data patterns, is introduced. This scheme is shown to outperform nine other popular classifiers when compared using conventional classification accuracy as well as a form of leave-one-out analysis that may be more representative of real prosthetic use. Additionally, the classification scheme allows for real-time, independent adjustment of individual class-pair boundaries making it flexible and intuitive for clinical use.
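    A toy sketch of a selective one-versus-one vote with rejection: a sample is accepted only when one class wins every pairing it appears in, and is otherwise rejected as an unknown pattern. The unanimity rule and the 1-D threshold classifiers are illustrative assumptions, not the authors' exact decision boundaries:

```python
def ovo_classify(x, pairwise, classes):
    # pairwise: {(a, b): f} where f(x) returns the winning class of the pair.
    wins = {c: 0 for c in classes}
    for clf in pairwise.values():
        wins[clf(x)] += 1
    best = max(wins, key=wins.get)
    # Selective rule: accept only if `best` won all of its pairings.
    return best if wins[best] == len(classes) - 1 else None

pairwise = {
    ("A", "B"): lambda x: "A" if x < 5 else "B",
    ("A", "C"): lambda x: "A" if x < 3 else "C",
    ("B", "C"): lambda x: "B" if x < 9 else "C",
}
assert ovo_classify(1, pairwise, "ABC") == "A"    # unanimous: accept
assert ovo_classify(4, pairwise, "ABC") is None   # circular votes: reject
```

    Because each class-pair boundary is an independent classifier, any single boundary can be retuned without retraining the rest, which is the flexibility the abstract highlights.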

  15. ECG compression using non-recursive wavelet transform with quality control

    NASA Astrophysics Data System (ADS)

    Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching

    2016-09-01

    While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of its multiple-to-one mapping, such a scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) for a high compression ratio (CR), the second a quadratic curve fitting for linear distortion control, and the third a fuzzy decision-making process for minimising the data-dependency effect and selecting the optimal SQ. Two databases, Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of training data and can facilitate rapid error control.

  16. Space shuttle orbit maneuvering engine reusable thrust chamber program

    NASA Technical Reports Server (NTRS)

    Senneff, J. M.

    1975-01-01

    The feasibility of potential reusable thrust chamber concepts was studied. Propellant candidates were examined and analytically combined with potential cooling schemes. A data base of engine data was produced to assist in configuration selection, and it was verified by demonstrating a thrust chamber with the selected coolant-scheme design. A full-scale insulated columbium thrust chamber was used for the propellant-coolant configurations. Combustion stability of the injectors and a reduced-size thrust chamber were experimentally verified as proof-of-concept demonstrations of the design and study results.

  17. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and combines engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all candidate schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  18. Dispersion-based Fresh-slice Scheme for Free-Electron Lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guetg, Marc

    The Fresh-slice technique improved the performance of several self-amplified spontaneous emission free-electron laser schemes by granting selective control over the temporal lasing slice without spoiling the other electron-bunch slices. So far, the implementation required a special insertion device, called a dechirper, to create the beam yaw. We demonstrate a novel scheme that enables Fresh-slice operation based on electron energy chirp and orbit dispersion, which can be implemented at any free-electron laser facility without additional hardware.

  19. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  20. A novel hybrid approach with multidimensional-like effects for compressible flow computations

    NASA Astrophysics Data System (ADS)

    Kalita, Paragmoni; Dass, Anoop K.

    2017-07-01

    A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, these schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely, AUSM and DRLLF and coupling them through a novel shock switch that operates - unlike existing switches - on the gradient of the Mach number across the cell-interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but it is not so good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks, but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities are not obtained. To circumvent this problem we design a shock switch based - for the first time - on the gradient of the cell-interface Mach number with very impressive results. Thus the strategy of hybridizing two carefully selected schemes together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme with a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution for both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
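    The switch-plus-blend strategy can be caricatured in a few lines. The linear ramp and the constant `kappa` below are invented placeholders, since the abstract does not give the switch's functional form, only that it acts on the cell-interface Mach-number gradient:

```python
def shock_switch(mach_left: float, mach_right: float, kappa: float = 5.0) -> float:
    # Weight in [0, 1] driven by the cell-interface Mach-number gradient:
    # large gradients (strong, grid-aligned shocks) push the weight toward
    # the AUSM flux; small gradients favor the DRLLF flux.
    return min(1.0, kappa * abs(mach_right - mach_left))

def hybrid_flux(flux_ausm: float, flux_drllf: float, w: float) -> float:
    # Convex blend of the two component fluxes.
    return w * flux_ausm + (1.0 - w) * flux_drllf

w = shock_switch(0.95, 1.30)   # strong interface jump saturates the switch
assert w == 1.0
assert hybrid_flux(2.0, 4.0, 0.25) == 3.5
```

    Basing the switch on the Mach-number gradient rather than on density or pressure is the paper's key design choice; the blend itself is the standard hybrid-flux form.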

  1. Student’s scheme in solving mathematics problems

    NASA Astrophysics Data System (ADS)

    Setyaningsih, Nining; Juniati, Dwi; Suwarsono

    2018-03-01

    The purpose of this study was to investigate students' schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we examined them in the context of solving mathematics problems, especially on ratio and proportion topics. A scheme is related to problem solving in that it assumes a system is developed in the human mind by acquiring a structure in which problem-solving procedures are integrated with certain concepts. The data were collected through interviews and students' written work. The results revealed the following students' schemes in solving ratio and proportion problems: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge; (2) the formal scheme, where students can construct a mental model based on the components selected from the problem and can use existing schemes to build planning steps and create what will be used to solve the problem; and (3) the language scheme, where students can identify the terms or symbols of the components of the problem. Therefore, by using different strategies to solve the problems, students' schemes in solving ratio and proportion problems will also differ.

  2. Adverse Selection in Community Based Health Insurance among Informal Workers in Bangladesh: An EQ-5D Assessment

    PubMed Central

    Sarker, Abdur Razzaque; Sultana, Marufa; Chakrovorty, Sanchita; Khan, Jahangir A. M.

    2018-01-01

    Community-based Health Insurance (CBHI) schemes are recommended for providing financial risk protection to low-income informal workers in Bangladesh. We assessed the problem of adverse selection in a pilot CBHI scheme in this context. In total, 1292 (646 insured and 646 uninsured) respondents were surveyed using the Bengali version of the EuroQol-5 dimensions (EQ-5D) questionnaire for assessing their health status. The EQ-5D scores were estimated using available regional tariffs. Multiple logistic regression was applied for predicting the association between health status and CBHI scheme enrolment. A higher proportion of the insured reported problems in mobility (7.3%; p = 0.002), self-care (7.1%; p = 0.000) and pain and discomfort (7.7%; p = 0.005) than the uninsured. The average EQ-5D score was significantly lower among the insured (0.704) compared to the uninsured (0.749). The regression analysis showed that those who had a problem in mobility (OR = 1.65; 95% CI: 1.25–2.17), self-care (OR = 2.29; 95% CI: 1.62–3.25) and pain and discomfort (OR = 1.43; 95% CI: 1.13–1.81) were more likely to join the scheme. Individuals with higher EQ-5D scores (OR = 0.46; 95% CI: 0.31–0.69) were less likely to enroll in the scheme. Given that adverse selection was evident in the pilot CBHI scheme, this problem should be considered when planning the scale-up of such schemes. PMID:29385072

  3. Electroreduction-based electrochemical-enzymatic redox cycling for the detection of cancer antigen 15-3 using graphene oxide-modified indium-tin oxide electrodes.

    PubMed

    Park, Seonhwa; Singh, Amardeep; Kim, Sinyoung; Yang, Haesik

    2014-02-04

    We compare herein biosensing performance of two electroreduction-based electrochemical-enzymatic (EN) redox-cycling schemes [the redox cycling combined with simultaneous enzymatic amplification (one-enzyme scheme) and the redox cycling combined with preceding enzymatic amplification (two-enzyme scheme)]. To minimize unwanted side reactions in the two-enzyme scheme, β-galactosidase (Gal) and tyrosinase (Tyr) are selected as an enzyme label and a redox enzyme, respectively, and Tyr is selected as a redox enzyme label in the one-enzyme scheme. The signal amplification in the one-enzyme scheme consists of (i) enzymatic oxidation of catechol into o-benzoquinone by Tyr and (ii) electroreduction-based EN redox cycling of o-benzoquinone. The signal amplification in the two-enzyme scheme consists of (i) enzymatic conversion of phenyl β-d-galactopyranoside into phenol by Gal, (ii) enzymatic oxidation of phenol into catechol by Tyr, and (iii) electroreduction-based EN redox cycling of o-benzoquinone including further enzymatic oxidation of catechol to o-benzoquinone by Tyr. Graphene oxide-modified indium-tin oxide (GO/ITO) electrodes, simply prepared by immersing ITO electrodes in a GO-dispersed aqueous solution, are used to obtain better electrocatalytic activities toward o-benzoquinone reduction than bare ITO electrodes. The detection limits for mouse IgG, measured with GO/ITO electrodes, are lower than when measured with bare ITO electrodes. Importantly, the detection of mouse IgG using the two-enzyme scheme allows lower detection limits than that using the one-enzyme scheme, because the former gives higher signal levels at low target concentrations although the former gives lower signal levels at high concentrations. The detection limit for cancer antigen (CA) 15-3, a biomarker of breast cancer, measured using the two-enzyme scheme and GO/ITO electrodes is ca. 0.1 U/mL, indicating that the immunosensor is highly sensitive.

  4. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
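    A toy sketch of the dynamic state-variable selection idea: two logistic-map variables advance in lockstep, and the previous ciphertext byte decides which one masks the next pixel, so the keystream becomes plaintext-dependent (the defense against known/chosen-plaintext attacks described above). The parameters and the parity selection rule are invented for illustration:

```python
def _advance(x, y, r):
    # One logistic-map step for each of the two chaotic state variables.
    return r * x * (1 - x), r * y * (1 - y)

def chaos_encrypt(pixels, x0=0.3123, y0=0.7671, r=3.99):
    x, y, prev, out = x0, y0, 0, []
    for p in pixels:
        x, y = _advance(x, y, r)
        # Dynamic selection: the previous ciphertext byte picks the
        # state variable that masks this pixel.
        k = int((x if prev % 2 == 0 else y) * 256) % 256
        c = ((p + k) % 256) ^ prev
        out.append(c)
        prev = c
    return out

def chaos_decrypt(cipher, x0=0.3123, y0=0.7671, r=3.99):
    x, y, prev, out = x0, y0, 0, []
    for c in cipher:
        x, y = _advance(x, y, r)
        k = int((x if prev % 2 == 0 else y) * 256) % 256
        out.append(((c ^ prev) - k) % 256)
        prev = c
    return out

pixels = [10, 200, 33, 0, 255]
assert chaos_decrypt(chaos_encrypt(pixels)) == pixels
```

    Because the selection depends on the evolving ciphertext, both chaotic variables contribute to every encryption pass even though only one masks each pixel, which is the efficiency argument the abstract makes.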

  5. Effective W-state fusion strategies for electronic and photonic qubits via the quantum-dot-microcavity coupled system.

    PubMed

    Han, Xue; Hu, Shi; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-08-05

    We propose effective fusion schemes for stationary electronic W states and flying photonic W states, respectively, using the quantum-dot-microcavity coupled system. The present schemes can fuse an n-qubit W state and an m-qubit W state into a (m + n - 1)-qubit W state; that is, they can be used not only to create large W states from small ones, but also to prepare 3-qubit W states from Bell states. The schemes are based on the optical selection rules and the transmission and reflection rules of the cavity, and can be achieved with high probability. We evaluate the effect of experimental imperfections and the feasibility of the schemes, showing that they can be realized with high fidelity in both the weak and strong coupling regimes. These schemes may be meaningful for large-scale solid-state-based quantum computation and photon-qubit-based quantum communication.

  6. Performance Analysis of Cluster Formation in Wireless Sensor Networks.

    PubMed

    Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-12-13

    Cluster-based wireless sensor networks have been used extensively in the literature to achieve considerable reductions in energy consumption. However, two aspects of such systems have been largely overlooked, namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to assume that sensor nodes in a cluster-based wireless sensor network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes; specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and their impact on the performance of the system under the different transmission probability schemes.
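    The fixed-versus-adaptive contrast can be made concrete with the classic slotted-contention model: with n contending nodes each transmitting control data with probability p, exactly one succeeds in a slot with probability n·p·(1-p)^(n-1), which is maximized at p = 1/n. A fixed p is therefore suboptimal whenever n drifts. This textbook model is our illustration, not the paper's exact channel model:

```python
def success_prob(n: int, p: float) -> float:
    # Probability that exactly one of n nodes transmits in a slot.
    return n * p * (1.0 - p) ** (n - 1)

n = 10
adaptive = success_prob(n, 1.0 / n)       # p matched to the contender count
assert adaptive > success_prob(n, 0.05)   # fixed p too small: idle slots
assert adaptive > success_prob(n, 0.30)   # fixed p too large: collisions
```

    An adaptive strategy that tracks the number of contenders stays near the optimum, whereas a fixed p wastes energy on idle slots or collisions as conditions change.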

  8. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    NASA Astrophysics Data System (ADS)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot working of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.

  9. A Negative Selection Immune System Inspired Methodology for Fault Diagnosis of Wind Turbines.

    PubMed

    Alizadeh, Esmaeil; Meskin, Nader; Khorasani, Khashayar

    2017-11-01

    High operational and maintenance costs represent major economic constraints in the wind turbine (WT) industry. These concerns have made investigation into fault diagnosis of WT systems an extremely important and active area of research. In this paper, an immune system (IS) inspired methodology for performing fault detection and isolation (FDI) of a WT system is proposed and developed. The proposed scheme is based on the self/nonself discrimination paradigm of a biological IS. Specifically, the negative selection mechanism [negative selection algorithm (NSA)] of the human body is utilized. A hierarchical bank of NSAs is designed to detect and isolate both individual and simultaneously occurring faults common to WTs. A smoothing moving-window filter is then utilized to further improve the reliability and performance of the FDI scheme. Moreover, the performance of our proposed scheme is compared with another state-of-the-art data-driven technique, namely support vector machines (SVMs), to demonstrate the advantages of our NSA-based FDI scheme. Finally, a nonparametric statistical comparison test is implemented to evaluate our proposed methodology against the SVM under various fault severities.
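    The negative-selection core is compact enough to sketch: candidate detectors are kept only if they fail to match every "self" (healthy-operation) pattern, and a sample later matched by any surviving detector is flagged as a fault. Binary patterns and a Hamming-distance matcher are simplifying assumptions, not the paper's WT-signal encoding:

```python
import random

def matches(a, b, threshold=1):
    # Two patterns "match" when their Hamming distance is small.
    return sum(x != y for x, y in zip(a, b)) <= threshold

def censor_detectors(self_set, n_detectors, length, rng):
    # Negative selection: discard any candidate that matches self.
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if not any(matches(cand, s) for s in self_set):
            detectors.append(cand)
    return detectors

def is_fault(sample, detectors):
    return any(matches(sample, d) for d in detectors)

healthy = {(0, 0, 0, 0, 0, 0)}
detectors = censor_detectors(healthy, 30, 6, random.Random(7))
# Healthy patterns can never be flagged: detectors near self were censored.
assert not is_fault((0, 0, 0, 0, 0, 0), detectors)
```

    A hierarchical bank, as in the paper, runs one such detector set per fault class so that simultaneous faults can be isolated rather than merely detected.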

  10. Classification schemes for knowledge translation interventions: a practical resource for researchers.

    PubMed

    Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa

    2017-12-06

    As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers), intent (policy, education, practice), and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether to recommend each scheme for researcher use, or not. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, not accessible or duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives. 
Twenty-five classification schemes had broad applicability, six were specific, and four had elements of both. Twenty-three schemes targeted health providers, nine targeted both patients and providers and one targeted policy-makers. Most classification schemes were intended for implementation rather than dissemination. Thirty-five classification schemes of KT interventions were developed and reported with sufficient rigour to be recommended for use by researchers interested in KT in healthcare. Our additional categorization and quality analysis will aid in selecting suitable classification schemes for research initiatives in the field of implementation science.

  11. Oriented regions grouping based candidate proposal for infrared pedestrian detection

    NASA Astrophysics Data System (ADS)

    Wang, Jiangtao; Zhang, Jingai; Li, Huaijiang

    2018-04-01

    Effectively and accurately locating the positions of pedestrian candidates in an image is a key task for an infrared pedestrian detection system. In this work, a novel similarity measuring metric is designed. Based on the selective search scheme, the developed similarity metric is utilized to yield possible locations for pedestrian candidates. In addition, corresponding diversification strategies are provided according to the characteristics of the infrared thermal imaging system. Experimental results indicate that the presented scheme achieves more efficient outputs than the traditional selective search methodology for the infrared pedestrian detection task.

  12. A diagnostic signal selection scheme for planetary gearbox vibration monitoring under non-stationary operational conditions

    NASA Astrophysics Data System (ADS)

    Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J.

    2017-03-01

    The planetary gearbox, due to its unique mechanical structure, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structures and working conditions make the vibrations measured from planetary gearboxes exhibit complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures of the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold-Kalman filter order tracking is used to extract the order(s) of interest; these filtered order(s) constitute the so-called selected vibrations. (3) Time-domain statistical indicators are applied to the selected vibrations for fault-information-emphasized diagnostics. The proposed scheme is explained and demonstrated in a signal simulation model and experimental studies, and the method proves to be effective for planetary gearbox fault diagnosis.

  13. Chain-Based Communication in Cylindrical Underwater Wireless Sensor Networks

    PubMed Central

    Javaid, Nadeem; Jafri, Mohsin Raza; Khan, Zahoor Ali; Alrajeh, Nabil; Imran, Muhammad; Vasilakos, Athanasios

    2015-01-01

    Appropriate network design is very significant for Underwater Wireless Sensor Networks (UWSNs). Application-oriented UWSNs are planned to achieve certain objectives. Therefore, there is always a demand for efficient data routing schemes, which can fulfill certain requirements of application-oriented UWSNs. These networks can be of any shape, i.e., rectangular, cylindrical or square. In this paper, we propose chain-based routing schemes for application-oriented cylindrical networks and also formulate mathematical models to find a global optimum path for data transmission. In the first scheme, we devise four interconnected chains of sensor nodes to perform data communication. In the second scheme, we propose a routing scheme in which two chains of sensor nodes are interconnected, whereas in the third scheme, single-chain-based routing is performed in cylindrical networks. After finding local optimum paths in separate chains, we find global optimum paths through their interconnection. Moreover, we develop a computational model for the analysis of end-to-end delay. We compare the performance of the above three proposed schemes with that of Power Efficient Gathering System in Sensor Information Systems (PEGASIS) and Congestion adjusted PEGASIS (C-PEGASIS). Simulation results show that our proposed 4-chain based scheme performs better than the other selected schemes in terms of network lifetime, end-to-end delay, path loss, transmission loss, and packet sending rate. PMID:25658394
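
    The local chain construction underlying such schemes is commonly done greedily, PEGASIS-style: starting from one node, repeatedly connect the nearest unvisited neighbour. A minimal sketch (the exact chain-building and interconnection rules of the proposed schemes are not reproduced here):

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D node positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_chain(nodes, start=0):
    """Greedy nearest-neighbour chain over a list of (x, y) node positions."""
    chain = [nodes[start]]
    remaining = [p for i, p in enumerate(nodes) if i != start]
    while remaining:
        nxt = min(remaining, key=lambda p: dist(chain[-1], p))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

def chain_cost(chain):
    """Total hop distance along the chain (a proxy for transmission energy)."""
    return sum(dist(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
```

    In the multi-chain variants, several such chains would be built over node subsets and then interconnected to obtain the global path.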

  14. Does typing of Chlamydia trachomatis using housekeeping multilocus sequence typing reveal different sexual networks among heterosexuals and men who have sex with men?

    PubMed

    Versteeg, Bart; Bruisten, Sylvia M; van der Ende, Arie; Pannekoek, Yvonne

    2016-04-18

    Chlamydia trachomatis infections remain the most common bacterial sexually transmitted infection worldwide. To gain more insight into the epidemiology and transmission of C. trachomatis, several schemes of multilocus sequence typing (MLST) have been developed. We investigated the clustering of C. trachomatis strains derived from men who have sex with men (MSM) and heterosexuals using the MLST scheme based on 7 housekeeping genes (MLST-7), adapted for clinical specimens, and a high-resolution MLST scheme based on 6 polymorphic genes, including ompA (hr-MLST-6). Specimens from 100 C. trachomatis-infected MSM and 100 heterosexual women were randomly selected from previous studies and sequenced. We adapted the MLST-7 scheme to a nested assay to make it suitable for direct typing of clinical specimens. All selected specimens were typed using both the adapted MLST-7 scheme and the hr-MLST-6 scheme. Clustering of C. trachomatis strains derived from MSM and heterosexuals was assessed using minimum spanning tree analysis. Sufficient chlamydial DNA was present in 188 of the 200 (94 %) selected samples. Using the adapted MLST-7 scheme, full MLST profiles were obtained for 187 of the 188 tested specimens, a success rate of 99.5 %. Of these 187 specimens, 91 (48.7 %) were from MSM and 96 (51.3 %) from heterosexuals. We detected 21 sequence types (STs) using the adapted MLST-7 scheme and 79 STs using the hr-MLST-6 scheme. Minimum spanning tree analysis was used to examine the clustering of MLST-7 data, which showed no reflection of separate transmission in MSM and heterosexual hosts. Moreover, typing using the hr-MLST-6 scheme identified genetically related clusters within each of the clusters that were identified using the MLST-7 scheme. No distinct transmission of C. trachomatis could be observed in MSM and heterosexuals using the adapted MLST-7 scheme, in contrast to the hr-MLST-6 scheme.
In addition, we compared clustering of both MLST schemes and demonstrated that typing using the hr-MLST-6 scheme is able to identify genetically related clusters of C. trachomatis strains within each of the clusters that were identified by using the MLST-7 scheme.

  15. Selection against canine hip dysplasia: success or failure?

    PubMed

    Wilson, Bethany; Nicholas, Frank W; Thomson, Peter C

    2011-08-01

    Canine hip dysplasia (CHD) is a multifactorial skeletal disorder which is very common in pedigree dogs and represents a huge concern for canine welfare. Control schemes based on selective breeding have been in operation for decades. The aim of these schemes is to reduce the impact of CHD on canine welfare by selecting for reduced radiographic evidence of CHD pathology as assessed by a variety of phenotypes. There is less information regarding the genotypic correlation between these phenotypes and the impact of CHD on canine welfare. Although the phenotypes chosen as the basis for these control schemes have displayed heritable phenotypic variation in many studies, success in achieving improvement in the phenotypes has been mixed. There is significant room for improvement in the current schemes through the use of estimated breeding values (EBVs), which can combine a dog's CHD phenotype with CHD phenotypes of relatives, other phenotypes as they are proven to be genetically correlated with CHD (especially elbow dysplasia phenotypes), and information from genetic tests for population-relevant DNA markers, as such tests become available. Additionally, breed clubs should be encouraged and assisted to formulate rational, evidence-based breeding recommendations for CHD which suit their individual circumstances, and to dynamically adjust those recommendations based on continuous tracking of CHD genetic trends. These improvements can assist in safely and effectively reducing the impact of CHD on pedigree dog welfare. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme, identified based on the classification performance of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
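
    The SNR and RMSE figures of merit used to rank the denoising schemes in the first part are straightforward to compute. A minimal sketch, where `outputs` pairs a scheme name with its denoised signal and the reference `clean` signal is known (as it is for the synthetic test case):

```python
import math

def rmse(clean, denoised):
    """Root-mean-square error of the denoised signal against the clean reference."""
    return math.sqrt(sum((c - d) ** 2 for c, d in zip(clean, denoised)) / len(clean))

def snr_db(clean, denoised):
    """Output SNR in dB: clean signal power over residual error power."""
    signal_power = sum(c * c for c in clean)
    noise_power = sum((c - d) ** 2 for c, d in zip(clean, denoised))
    return 10 * math.log10(signal_power / noise_power)

def rank_schemes(clean, outputs):
    """outputs: list of (scheme_name, denoised_signal); best scheme first
    (highest SNR, ties broken by lowest RMSE)."""
    return sorted(outputs, key=lambda item: (-snr_db(clean, item[1]), rmse(clean, item[1])))
```

    On real test-rig data the clean reference is unavailable, which is why the paper falls back on classifier performance to rank the schemes there.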

  17. Transport and energy selection of laser generated protons for postacceleration with a compact linac

    NASA Astrophysics Data System (ADS)

    Sinigardi, Stefano; Turchetti, Giorgio; Londrillo, Pasquale; Rossi, Francesco; Giove, Dario; De Martinis, Carlo; Sumini, Marco

    2013-03-01

    Laser accelerated proton beams have considerable potential for various applications, including oncological therapy. However, the most consolidated regime, target normal sheath acceleration based on irradiation of solid targets, provides an exponential energy spectrum with significant divergence. The low particle count at the cutoff energy at present seriously limits its possible use. One realistic scenario for the near future is offered by hybrid schemes. The use of transport lines for collimation and energy selection has been considered. We present here a scheme based on a high-field pulsed solenoid and collimators which allows one to select a beam suitable for injection at 30 MeV into a compact linac in order to double its energy while preserving a significant intensity. The results are based on a fully 3D simulation starting from laser acceleration.

  18. Book Selection, Collection Development, and Bounded Rationality.

    ERIC Educational Resources Information Center

    Schwartz, Charles A.

    1989-01-01

    Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…

  19. Performance Analysis of Transmit Diversity Systems with Multiple Antenna Replacement

    NASA Astrophysics Data System (ADS)

    Park, Ki-Hong; Yang, Hong-Chuan; Ko, Young-Chai

    Transmit diversity systems based on orthogonal space-time block coding (OSTBC) usually suffer from rate loss and power spreading. A proper antenna selection scheme can help to utilize the transmit antennas and transmission power in such systems more effectively. In this paper, we propose a new antenna selection scheme for such systems based on the idea of antenna switching. In particular, aiming to reduce the number of pilot channels and RF chains, the transmitter replaces the antennas with the lowest received SNR with unused ones whenever the output SNR of the space-time decoder at the receiver falls below a certain threshold. With this new scheme, not only is the number of pilot channels and RF chains to be implemented decreased, but the average amount of feedback information is also reduced. To analyze the performance of this scheme, we derive an exact closed-form integral expression for the probability density function (PDF) of the received SNR. We show through numerical examples that the proposed scheme offers better performance than traditional OSTBC systems using all available transmit antennas, with a small amount of feedback information. We also examine the effect of different antenna configurations and feedback delay.

  20. Performance Analysis of Physical Layer Security of Opportunistic Scheduling in Multiuser Multirelay Cooperative Networks

    PubMed Central

    Shim, Kyusung; Do, Nhu Tri; An, Beongku

    2017-01-01

    In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) technique and the selection combining (SC) technique are considered at the eavesdropper, respectively. Investigating the system performance in terms of secrecy outage probability (SOP), closed-form expressions of the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to that of the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimations compared to that required by the OJSRS scheme, especially in dense cooperative networks. PMID:28212286
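
    The two-step PSRS selection logic can be sketched directly from its description: first the source with the lowest eavesdropper SNR, then the relay maximizing secrecy capacity for that source. The dictionary layout of the SNR tables below is an assumption for illustration, not the paper's notation:

```python
import math

def secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity of one link in bits/s/Hz, floored at zero."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

def psrs_select(source_eve_snr, relay_main_snr, relay_eve_snr):
    """Step 1: pick the least vulnerable source (lowest SNR at the eavesdropper).
    Step 2: for that source, pick the relay maximizing secrecy capacity.

    source_eve_snr: {source: SNR at eavesdropper}
    relay_main_snr/relay_eve_snr: {source: {relay: SNR}} for the two hops.
    """
    src = min(source_eve_snr, key=source_eve_snr.get)
    relay = max(relay_main_snr[src],
                key=lambda r: secrecy_capacity(relay_main_snr[src][r],
                                               relay_eve_snr[src][r]))
    return src, relay
```

    Unlike a joint search over all source-relay pairs (the OJSRS approach), this two-step rule only needs the eavesdropper CSI of the sources plus the CSI of one source's relays, which is the source of the CSI savings the abstract reports.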

  1. Relevance popularity: A term event model based feature selection scheme for text classification.

    PubMed

    Feng, Guozhong; An, Baiguo; Yang, Fengqin; Wang, Han; Zhang, Libiao

    2017-01-01

    Feature selection is a practical approach for improving the performance of text classification methods by optimizing the feature subsets input to classifiers. In traditional feature selection methods such as information gain and chi-square, the number of documents that contain a particular term (i.e. the document frequency) is often used. However, the frequency of a given term appearing in each document has not been fully investigated, even though it is a promising feature to produce accurate classifications. In this paper, we propose a new feature selection scheme based on a term event Multinomial naive Bayes probabilistic model. According to the model assumptions, the matching score function, which is based on the prediction probability ratio, can be factorized. Finally, we derive a feature selection measurement for each term after replacing inner parameters by their estimators. On a benchmark English text dataset (20 Newsgroups) and a Chinese text dataset (MPH-20), numerical experiment results obtained from two widely used text classifiers (naive Bayes and support vector machine) demonstrate that our method outperforms representative feature selection methods.
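
    As a hedged illustration of multinomial-NB-based term scoring (a generic stand-in, not the paper's exact factorized measure), terms can be ranked by the Laplace-smoothed log-probability ratio of their occurrence rates between a target class and its complement, so that term frequency within documents, not just document frequency, drives the score:

```python
import math
from collections import Counter

def term_scores(docs, labels, target):
    """Score each term by the multinomial-NB log-ratio between the target class
    and its complement. docs: list of token lists; labels: class per doc."""
    in_counts, out_counts = Counter(), Counter()
    for doc, lab in zip(docs, labels):
        (in_counts if lab == target else out_counts).update(doc)
    vocab = set(in_counts) | set(out_counts)
    n_in, n_out, V = sum(in_counts.values()), sum(out_counts.values()), len(vocab)
    scores = {}
    for t in vocab:
        p_in = (in_counts[t] + 1) / (n_in + V)    # Laplace smoothing
        p_out = (out_counts[t] + 1) / (n_out + V)
        scores[t] = math.log(p_in / p_out)
    return scores

def select_features(scores, k):
    """Keep the k terms with the largest absolute log-ratio."""
    return sorted(scores, key=lambda t: abs(scores[t]), reverse=True)[:k]
```

    Terms used at very different rates in the two classes get large-magnitude scores in either direction, while class-neutral terms score near zero and are dropped.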

  2. Phase 1 engineering and technical data report for the thermal control extravehicular life support system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A shuttle EVLSS Thermal Control System (TCS) is defined. Thirteen heat rejection subsystems, thirteen water management subsystems, nine humidity control subsystems, three pressure control schemes, and five temperature control schemes are evaluated. Sixteen integrated TCS systems are studied, and an optimum system is selected based on quantitative weighting of weight, volume, cost, complexity, and other factors. The selected subsystem contains a sublimator for heat rejection, a bubble expansion tank for water management, and a slurper and rotary separator for humidity control. Design of the selected subsystem prototype hardware is presented.
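
    The quantitative weighting used for subsystem selection is a classic weighted-sum trade study. A minimal sketch, with hypothetical normalized criterion values (0 = best, 1 = worst) and illustrative weights, not the report's actual figures:

```python
def weighted_score(metrics, weights):
    """Weighted-sum trade-study score; lower is better."""
    return sum(metrics[k] * w for k, w in weights.items())

def select_subsystem(candidates, weights):
    """Pick the candidate whose weighted criterion score is smallest."""
    return min(candidates, key=lambda name: weighted_score(candidates[name], weights))
```

    The same scoring step would be repeated per function (heat rejection, water management, humidity control) and then over the sixteen integrated combinations.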

  3. Optimisation of colour schemes to accurately display mass spectrometry imaging data based on human colour perception.

    PubMed

    Race, Alan M; Bunch, Josephine

    2015-03-01

    The choice of colour scheme used to present data can have a dramatic effect on the perceived structure present within the data. This is of particular significance in mass spectrometry imaging (MSI), where ion images that provide 2D distributions of a wide range of analytes are used to draw conclusions about the observed system. Commonly employed colour schemes are generally suboptimal for providing an accurate representation of the maximum amount of data. Rainbow-based colour schemes are extremely popular within the community, but they introduce well-documented artefacts which can be actively misleading in the interpretation of the data. In this article, we consider the suitability of colour schemes and composite image formation found in MSI literature in the context of human colour perception. We also discuss recommendations of rules for colour scheme selection for ion composites and multivariate analysis techniques such as principal component analysis (PCA).

  4. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
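
    For linear advection u_t + a u_x = 0 (a > 0), the described construction (first-order upwind enhanced by limited one-sided/central slopes) reduces to a familiar minmod-limited MUSCL update. A minimal sketch on a periodic grid, with a total-variation diagnostic to verify the TVD property; this is a standard scheme of the family the abstract discusses, not the authors' exact stencil selection:

```python
def minmod(a, b):
    """Limited slope: zero at extrema, else the smaller-magnitude difference."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_step(u, c):
    """One explicit step for u_t + a u_x = 0 (a > 0), CFL number 0 < c <= 1,
    periodic boundaries, second-order MUSCL reconstruction with minmod limiting."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # upwind interface value leaving cell i to the right
    face = [u[i] + 0.5 * (1.0 - c) * slope[i] for i in range(n)]
    return [u[i] - c * (face[i] - face[i - 1]) for i in range(n)]

def total_variation(u):
    n = len(u)
    return sum(abs(u[(i + 1) % n] - u[i]) for i in range(n))
```

    With the minmod limiter the slope vanishes at local extrema, which is exactly what keeps the total variation from growing and suppresses Gibbs oscillations at the step.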

  5. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  6. Cluster analysis based on dimensional information with applications to feature selection and classification

    NASA Technical Reports Server (NTRS)

    Eigen, D. J.; Fromm, F. R.; Northouse, R. A.

    1974-01-01

    A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.

  7. A frame selective dynamic programming approach for noise robust pitch estimation.

    PubMed

    Yarra, Chiranjeevi; Deshmukh, Om D; Ghosh, Prasanta Kumar

    2018-04-01

    The principles of the existing pitch estimation techniques are often different and complementary in nature. In this work, a frame selective dynamic programming (FSDP) method is proposed which exploits the complementary characteristics of two existing methods, namely, the sub-harmonic to harmonic ratio (SHR) and the sawtooth-wave inspired pitch estimator (SWIPE). Using variants of SHR and SWIPE, the proposed FSDP method classifies all the voiced frames into two classes: the first class consists of the frames where a confidence score maximization criterion is used for pitch estimation, while for the second class, a dynamic programming (DP) based approach is proposed. Experiments are performed on speech signals separately from the KEELE, CSLU, and PaulBaghsaw corpora under clean and additive white Gaussian noise at 20, 10, 5, and 0 dB SNR conditions using four baseline schemes, including SHR, SWIPE, and two DP based techniques. The pitch estimation performance of FSDP, when averaged over all SNRs, is found to be better than those of the baseline schemes, suggesting the benefit of applying a smoothness constraint using DP in selected frames in the proposed FSDP scheme. The voiced/unvoiced (VuV) classification error from FSDP is also found to be lower than that from all four baseline schemes in almost all SNR conditions on the three corpora.
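
    The DP stage of such a scheme is essentially a Viterbi search over per-frame pitch candidates, trading per-frame confidence against a pitch-jump penalty. A minimal sketch with hypothetical candidate/score inputs and a linear jump cost (an assumption; the paper's exact cost function is not reproduced here):

```python
def dp_pitch_track(candidates, scores, jump_penalty=1.0):
    """Viterbi-style DP over pitch candidates.

    candidates[t]: candidate pitches (Hz) for frame t; scores[t]: their
    confidence scores (higher is better). Minimizes summed negative score
    plus jump_penalty * |pitch jump| between consecutive frames.
    """
    n = len(candidates)
    cost = [[-s for s in scores[0]]]
    back = []
    for t in range(1, n):
        row, brow = [], []
        for j, p in enumerate(candidates[t]):
            best, arg = None, 0
            for i, q in enumerate(candidates[t - 1]):
                c = cost[-1][i] + jump_penalty * abs(p - q)
                if best is None or c < best:
                    best, arg = c, i
            row.append(best - scores[t][j])
            brow.append(arg)
        cost.append(row)
        back.append(brow)
    j = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    path = [candidates[-1][j]]
    for t in range(n - 2, -1, -1):      # backtrack through stored predecessors
        j = back[t][j]
        path.append(candidates[t][j])
    return path[::-1]
```

    The smoothness constraint makes the tracker reject an isolated halving/doubling error even when its raw confidence is slightly higher.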

  8. Evaluation of selective control information detection scheme in orthogonal frequency division multiplexing-based radio-over-fiber and visible light communication links

    NASA Astrophysics Data System (ADS)

    Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.

    2017-05-01

    Block-level detection is required to decode what may be classified as selective control information (SCI), such as the control format indicator in 4G Long-Term Evolution (LTE) systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report the experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique in comparison with the conventional maximum likelihood (ML) approach. When compared with the ML method, it is shown that the TDC method improves detection performance over both 20 and 40 km of standard single-mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Unlike the ML method, another key benefit of the TDC is that it requires no channel estimation.

  9. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs of the fuzzy decision making system in order to select the most preferable AP among nearby WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, also relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, which are collected from available WLANs, are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches. PMID:25574490
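
    A crisp, simplified stand-in for the adaptive part of the scheme: each metric is normalized by its mean and standard deviation across the observed WLANs (the adaptive statistics), then combined by a weighted score. The weights, field names, and the replacement of fuzzy inference by a weighted sum are illustrative assumptions:

```python
import statistics

def zscore_normalize(values):
    """Adaptive normalization: center and scale by the observed mean/std."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0   # guard against zero spread
    return [(v - mu) / sd for v in values]

def select_ap(aps, weights=(0.5, 0.3, 0.2)):
    """aps: list of {"name", "rss", "direction", "load"} dicts.
    Higher RSS and direction alignment are better; lower load is better."""
    rss = zscore_normalize([a["rss"] for a in aps])
    direc = zscore_normalize([a["direction"] for a in aps])
    load = zscore_normalize([a["load"] for a in aps])
    w_rss, w_dir, w_load = weights
    scores = [w_rss * r + w_dir * d - w_load * l
              for r, d, l in zip(rss, direc, load)]
    best = max(range(len(aps)), key=scores.__getitem__)
    return aps[best]["name"]
```

    In the actual scheme these per-metric statistics would instead adapt the membership function coefficients of the fuzzy inference system, and the weight vector itself would be adjusted over time.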

  11. Development and selection of Asian-specific humeral implants based on statistical atlas: toward planning minimally invasive surgery.

    PubMed

    Wu, K; Daruwalla, Z J; Wong, K L; Murphy, D; Ren, H

    2015-08-01

    Commercial humeral implants based on the Western population are currently not entirely compatible with Asian patients, due to differences in bone size, shape and structure. Surgeons may have to compromise or use less conforming implants, which may cause complications with, as well as inconvenience in, implant positioning. The construction of Asian humerus atlases of different clusters has therefore been proposed to address this problem and to facilitate the planning of minimally invasive surgical procedures [6,31]. According to the features of the atlases, new implants could be designed specifically for different patients. Furthermore, an automatic implant selection algorithm has been proposed in order to reduce the complications caused by implant-bone mismatch. Prior to the design of the implant, data clustering and extraction of the relevant features were carried out on the datasets of each gender. The fuzzy C-means clustering method is explored in this paper. In addition, two new schemes of implant selection procedures, namely the Procrustes analysis-based scheme and the group average distance-based scheme, were proposed to better search the database for implants matching new patients. Neither algorithm has previously been used in this area, yet both show excellent performance in implant selection. Additionally, algorithms to calculate the matching scores between various implants and the patient data are proposed in this paper to assist the implant selection procedure. The results obtained have indicated the feasibility of the proposed development and selection scheme. The 16 sets of male data were divided into two clusters of 8 subjects each, and the 11 female datasets were divided into two clusters with 5 and 6 subjects, respectively. Based on the features of each cluster, the implants designed by the proposed algorithm fit very well on their reference humeri, and the proposed implant selection procedure allows a patient to be treated with merely a preoperative anatomical model while still correctly selecting the best-fitting implant. Based on leave-one-out validation, it can be concluded that both the PA-based method and the GAD-based method achieve excellent performance on the implant selection problem. The accuracy and average execution time for the PA-based method were 100 % and 0.132 s, respectively, while those of the GAD-based method were 100 % and 0.058 s. Therefore, the GAD-based method outperformed the PA-based method in terms of execution speed. The primary contributions of this paper include the proposal of methods for the development of Asian-, gender- and cluster-specific implants based on shape features and the selection of the best-fit implants for future patients according to their features. To the best of our knowledge, this is the first work that proposes automatic implant design and selection for Asian patients based on features extracted from cluster-specific statistical atlases.
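
    A group-average-distance style matching step can be sketched as a mean landmark-to-landmark distance after removing translation. The landmark layout and the exact score definition below are assumptions for illustration, not the paper's implementation (which would also handle scaling and rotation, as full Procrustes analysis does):

```python
import math

def centroid(pts):
    """Mean point of a landmark set."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def center(pts):
    """Translate a landmark set so its centroid is at the origin."""
    c = centroid(pts)
    return [tuple(p[i] - c[i] for i in range(len(c))) for p in pts]

def avg_landmark_distance(bone, implant):
    """Matching score: mean distance between corresponding landmarks after
    removing translation (lower = better fit)."""
    a, b = center(bone), center(implant)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def select_implant(bone_landmarks, implant_db):
    """Return the name of the implant with the smallest matching score."""
    return min(implant_db, key=lambda name: avg_landmark_distance(bone_landmarks,
                                                                  implant_db[name]))
```

    Because only a preoperative anatomical model is needed to compute such a score, the best-fit implant can be chosen before surgery, which is the workflow the paper targets.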

  12. Fixing Dataset Search

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    Three current search engines are queried for ozone data at the GES DISC. The results range from sub-optimal to counter-intuitive. We propose a method to fix dataset search by implementing a robust relevancy ranking scheme. The relevancy ranking scheme is based on several heuristics culled from more than 20 years of helping users select datasets.
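    The abstract does not spell out the heuristics, so the sketch below is a hypothetical illustration of heuristic relevancy ranking for dataset search: the field names, weights, and popularity boost are assumptions, not the GES DISC production rules.

```python
def relevance_score(dataset, query_terms):
    """Toy heuristic relevancy score for dataset search.

    Fields, weights, and boosts are illustrative assumptions,
    not the production ranking heuristics."""
    score = 0.0
    title = dataset["title"].lower()
    description = dataset.get("description", "").lower()
    for term in query_terms:
        t = term.lower()
        if t in title:
            score += 3.0      # a title match counts most
        if t in description:
            score += 1.0      # a description match counts less
    score += 2.0 * dataset.get("popularity", 0.0)  # usage-based boost
    return score


def rank(datasets, query_terms):
    """Return datasets sorted from most to least relevant."""
    return sorted(datasets, key=lambda d: relevance_score(d, query_terms),
                  reverse=True)
```

    With such a scheme, a dataset whose title and description both match the query outranks a popular but unrelated one, which is the intuition behind heuristic ranking.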

  13. A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.

    PubMed

    Zhang, Yu; Zhang, Bing; Zhang, Shi

    2017-06-02

    Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs, through formulating and solving an optimization problem in which the relay selection of each node acts as an optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve network lifetime performance. Since the problem is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is then designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme achieves better network lifetime than existing algorithms, and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.
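    As a rough illustration of lifetime-oriented relay selection, the sketch below greedily assigns each source the relay option that maximizes the time until the first node exhausts its energy. The energy model and greedy order are simplifying assumptions; the paper's actual optimization and heuristic are more elaborate.

```python
def network_lifetime(residual, rates):
    """Time until the first node depletes its residual energy."""
    return min(e / r for e, r in zip(residual, rates) if r > 0)


def greedy_relay_selection(residual, base_rates, options):
    """Greedily pick, for each source, the option that maximizes the
    resulting network lifetime.  options[src] is a list of
    (label, {node: extra_consumption_rate}) choices, e.g. direct
    transmission loads the source while relaying loads the relay."""
    rates = list(base_rates)
    choice = {}
    for src, opts in options.items():
        best_label, best_extra, best_lt = None, None, -1.0
        for label, extra in opts:
            trial = list(rates)
            for node, r in extra.items():
                trial[node] += r
            lt = network_lifetime(residual, trial)
            if lt > best_lt:
                best_label, best_extra, best_lt = label, extra, lt
        choice[src] = best_label
        for node, r in best_extra.items():
            rates[node] += r
    return choice, network_lifetime(residual, rates)
```

    Routing through a lightly loaded relay is preferred over a cheap direct link whenever the direct link would make the source the first node to die.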

  14. A Numerical Model for Predicting Shoreline Changes.

    DTIC Science & Technology

    1980-07-01

    Minimal shorelines for a finite-difference scheme of time step Δt (B) ... Transport function Q(α₀) = cos α₀ sin zα₀ for selected values of z ... The scheme used to generate the preceding examples was based on the use of implicit finite differences. Such schemes, whether implicit or explicit (or both), are ... Figure 10(A) shows an initially straight shoreline. In any finite-difference scheme, after one time increment Δt, the shoreline is bounded below by the solid

  15. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
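    A minimal sketch of the idea, under simplifying assumptions (1-D heat equation, one "remote" grid point whose neighbor value arrives one step late): using the stale value directly degrades accuracy, while combining two delayed levels so the leading delay error cancels, in the spirit of AT schemes, largely restores it. This toy is not one of the schemes derived in the paper.

```python
import numpy as np


def run(mode, nx=32, nt=200, alpha=0.2):
    """Explicit FD for the 1-D heat equation u_t = u_xx on [0, 1].

    The right neighbor of grid point j is treated as "remote":
      'sync'  - its current value is available (standard scheme),
      'stale' - only its one-step-old value is used, as-is,
      'at'    - two old levels are combined (2*u^{n-1} - u^{n-2}) so the
                leading delay error cancels, the flavor of AT schemes.
    Returns the max-norm error against the exact decaying-sine solution."""
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)
    hist = [u.copy(), u.copy(), u.copy()]      # levels n-2, n-1, n
    j = nx // 2
    for _ in range(nt):
        u2, u1, u0 = hist                       # oldest ... current
        un = u0.copy()
        un[1:-1] = u0[1:-1] + alpha * (u0[2:] - 2 * u0[1:-1] + u0[:-2])
        if mode != "sync":
            nbr = u1[j + 1] if mode == "stale" else 2 * u1[j + 1] - u2[j + 1]
            un[j] = u0[j] + alpha * (nbr - 2 * u0[j] + u0[j - 1])
        hist = [u1, u0, un]
    t = nt * alpha / nx ** 2                    # dt = alpha * dx^2
    exact = np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)
    return float(np.max(np.abs(hist[-1] - exact)))
```

    The 'at' variant's time-extrapolated neighbor value is a much better estimate of the unavailable current value than the raw stale one, which is why its error stays close to the synchronous scheme.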

  16. Comparison of two integration methods for dynamic causal modeling of electrophysiological data.

    PubMed

    Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier

    2018-06-01

    Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves them with a fixed step size. The second scheme uses a dedicated DDE solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme on parameter estimation and Bayesian model selection, we performed simulations of local field potentials using, first, a simple model comprising 2 regions and, second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. The performances of the two integration schemes were then directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. 
Fitting to empirical data showed that models using the second integration scheme systematically achieved higher accuracy. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
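    The practical difference between honoring a delay and approximating it away shows up even on a scalar toy DDE u'(t) = -a·u(t - τ): a fixed-step Euler integrator that looks up the delayed state and one that substitutes the current state (an ODE approximation) diverge noticeably. This only illustrates the issue; it is not the DCM integration code.

```python
def integrate_dde(a=1.0, tau=0.5, T=2.0, dt=0.001, handle_delay=True):
    """Fixed-step Euler for u'(t) = -a * u(t - tau) with u(t) = 1 for t <= 0.

    With handle_delay=False the delayed state is replaced by the current
    state (giving u' = -a * u), mimicking an ODE approximation of the DDE."""
    n_delay = int(round(tau / dt))
    hist = [1.0]                    # u at t = 0; constant history before that
    u = 1.0
    for n in range(int(round(T / dt))):
        if handle_delay:
            idx = n - n_delay       # index of u(t_n - tau) in the history
            u_delayed = hist[idx] if idx >= 0 else 1.0
        else:
            u_delayed = u
        u = u + dt * (-a * u_delayed)
        hist.append(u)
    return u
```

    For a = 1 and τ = 0.5 the delayed system overshoots below zero by t = 2, while the ODE approximation decays monotonically toward exp(-2): the two trajectories are qualitatively different.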

  17. A new approach to develop computer-aided detection schemes of digital mammograms

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Qian, Wei; Pu, Jiantao; Liu, Hong; Zheng, Bin

    2015-06-01

    The purpose of this study is to develop a new global mammographic image feature analysis based computer-aided detection (CAD) scheme and evaluate its performance in detecting positive screening mammography examinations. A dataset that includes images acquired from 1896 full-field digital mammography (FFDM) screening examinations was used in this study. Among them, 812 cases were positive for cancer and 1084 were negative or benign. After segmenting the breast area, a computerized scheme was applied to compute 92 global mammographic tissue density based features on each of four mammograms of the craniocaudal (CC) and mediolateral oblique (MLO) views. After adding three existing popular risk factors (woman's age, subjectively rated mammographic density, and family breast cancer history) into the initial feature pool, we applied a sequential forward floating selection feature selection algorithm to select relevant features from the bilateral CC and MLO view images separately. The selected CC and MLO view image features were used to train two artificial neural networks (ANNs). The results were then fused by a third ANN to build a two-stage classifier to predict the likelihood of the FFDM screening examination being positive. CAD performance was tested using a ten-fold cross-validation method. The computed area under the receiver operating characteristic curve was AUC = 0.779 ± 0.025 and the odds ratio monotonically increased from 1 to 31.55 as CAD-generated detection scores increased. The study demonstrated that this new global image feature based CAD scheme had a relatively higher discriminatory power to cue the FFDM examinations with high risk of being positive, which may provide a new CAD-cueing method to assist radiologists in reading and interpreting screening mammograms.

  18. A Comprehensive Study of Data Collection Schemes Using Mobile Sinks in Wireless Sensor Networks

    PubMed Central

    Khan, Abdul Waheed; Abdullah, Abdul Hanan; Anisi, Mohammad Hossein; Bangash, Javed Iqbal

    2014-01-01

    Recently, sink mobility has been exploited in numerous schemes to prolong the lifetime of wireless sensor networks (WSNs). Contrary to traditional WSNs where sensory data from the sensor field are ultimately sent to a static sink, mobile sink-based approaches alleviate energy-hole issues, thereby facilitating balanced energy consumption among nodes. In mobility scenarios, nodes need to keep track of the latest location of mobile sinks for data delivery. However, frequent propagation of sink topological updates undermines the energy conservation goal and therefore should be controlled. Furthermore, controlled propagation of sinks' topological updates affects the performance of routing strategies, thereby increasing data delivery latency and reducing packet delivery ratios. This paper presents a taxonomy of various data collection/dissemination schemes that exploit sink mobility. Based on how sink mobility is exploited in the sensor field, we classify existing schemes into three classes, namely path constrained, path unconstrained, and controlled sink mobility-based schemes. We also organize existing schemes based on their primary goals and provide a comparative study to aid readers in selecting the appropriate scheme in accordance with their particular intended applications and network dynamics. Finally, we conclude our discussion with the identification of some unresolved issues in pursuit of data delivery to a mobile sink. PMID:24504107

  19. Can paying for results help to achieve the Millennium Development Goals? A critical review of selected evaluations of results-based financing.

    PubMed

    Oxman, Andrew D; Fretheim, Atle

    2009-08-01

    Results-based financing (RBF) refers to the transfer of money or material goods conditional on taking a measurable action or achieving a predetermined performance target. RBF is being promoted for helping to achieve the Millennium Development Goals (MDGs). We undertook a critical appraisal of selected evaluations of RBF schemes in the health sector in low and middle-income countries (LMIC). In addition, key informants were interviewed to identify literature relevant to the use of RBF in the health sector in LMIC, key examples, evaluations, and other key informants. The use of RBF in LMIC has commonly been a part of a package that may include increased funding, technical support, training, changes in management, and new information systems. It is not possible to disentangle the effects of financial incentives as one element of RBF schemes, and there is very limited evidence of RBF per se having an effect. RBF schemes can have unintended effects. When RBF schemes are used, they should be designed carefully, including the level at which they are targeted, the choice of targets and indicators, the type and magnitude of incentives, the proportion of financing that is paid based on results, and the ancillary components of the scheme. For RBF to be effective, it must be part of an appropriate package of interventions, and technical capacity or support must be available. RBF schemes should be monitored for possible unintended effects and evaluated using rigorous study designs. © 2009 Blackwell Publishing Asia Pty Ltd and Chinese Cochrane Center, West China Hospital of Sichuan University.

  20. Numerical solution of modified differential equations based on symmetry preservation.

    PubMed

    Ozbenli, Ersin; Vedula, Prakash

    2017-12-01

    In this paper, we propose a method to construct invariant finite-difference schemes for the solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore the selection of appropriate moving frames that result in improved accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one and two dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes, not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.

  1. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    PubMed Central

    Robinson, Y. Harold; Rajaram, M.

    2015-01-01

    Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free, link-disjoint paths in a MANET, and is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In EMPSO, particle swarm optimization (PSO) is primarily used for training the CTRNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths using the PSO technique. PMID:26819966
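    For reference, gbest-style particle swarm optimization has a compact core. The sketch below minimizes a test function; the inertia (0.7) and attraction coefficients (1.5) are common defaults, not the settings used in EMPSO.

```python
import random


def pso(f, dim=2, n_particles=20, iters=150, seed=1):
    """Minimal gbest particle swarm optimization (minimization).

    Inertia/attraction parameters are common defaults, not tuned values."""
    rnd = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rnd.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # personal bests
    pbest_val = [f(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
        gbest = pbest[pbest_val.index(min(pbest_val))][:]
    return gbest, f(gbest)
```

    In EMPSO this optimizer plays the role of the trainer: the "fitness" would score candidate CTRNN parameters by the route-quality measures the paper describes, rather than a synthetic test function.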

  2. A novel lost packets recovery scheme based on visual secret sharing

    NASA Astrophysics Data System (ADS)

    Lu, Kun; Shan, Hong; Li, Zhi; Niu, Zhao

    2017-08-01

    In this paper, a novel lost packet recovery scheme is proposed which encrypts the effective parts of an original packet into two shadow packets based on (2, 2)-threshold XOR-based visual secret sharing (VSS). The two shadow packets, used as watermarks, are embedded into two normal data packets with digital watermarking embedding technology and then sent from one sensor node to another. Each shadow packet reveals no information about the original packet, which greatly improves the security of original packet delivery. The two shadow packets extracted from the two received data packets can recover the original packet losslessly based on XOR-based VSS. Performance analysis shows that the proposed scheme maintains essential services for as long as possible in the presence of selective forwarding attacks. The proposed scheme does not increase the amount of traffic, and hence keeps energy consumption low, which makes it suitable for wireless sensor networks (WSNs).
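    The (2, 2)-threshold XOR-based sharing at the heart of the scheme can be sketched in a few lines: one shadow is uniformly random, the other is the packet XORed with it, and XORing the two shadows recovers the packet losslessly. This omits the watermark embedding and is only a sketch of the secret-sharing step.

```python
import os


def make_shadows(packet: bytes):
    """(2, 2)-threshold XOR sharing: each shadow alone is uniformly random
    and reveals nothing about the packet; both together recover it."""
    shadow1 = os.urandom(len(packet))
    shadow2 = bytes(a ^ b for a, b in zip(packet, shadow1))
    return shadow1, shadow2


def recover(shadow1: bytes, shadow2: bytes) -> bytes:
    """Lossless recovery of the original packet from the two shadows."""
    return bytes(a ^ b for a, b in zip(shadow1, shadow2))
```

    Because `shadow2` is a one-time-pad encryption of the packet under `shadow1`, an attacker who captures either shadow alone learns nothing, which is what makes the scheme robust to selective forwarding of a single path.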

  3. Design of a vehicle based system to prevent ozone loss

    NASA Technical Reports Server (NTRS)

    Talbot, Matthew D.; Eby, Steven C.; Ireland, Glen J.; Mcwithey, Michael C.; Schneider, Mark S.; Youngblood, Daniel L.; Johnson, Matt; Taylor, Chris

    1994-01-01

    This project is designed to be completed over a three year period. Overall project goals are: (1) to understand the processes that contribute to stratospheric ozone loss; (2) to determine the best scheme to prevent ozone loss; and (3) to design a vehicle based system to carry out the prevention scheme. The 1993/1994 design objectives included: (1) to review the results of the 1992/1993 design team, including a reevaluation of the key assumptions used; (2) to develop a matrix of baseline vehicle concepts as candidates for the delivery vehicle; and (3) to develop a selection criteria and perform quantitative trade studies to use in the selection of the specific vehicle concept.

  4. Using Classical Reliability Models and Single Event Upset (SEU) Data to Determine Optimum Implementation Schemes for Triple Modular Redundancy (TMR) in SRAM-Based Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, M.; Kim, H.; Phan, A.; Seidleck, C.; LaBel, K.; Pellish, J.; Campola, M.

    2015-01-01

    Space applications are complex systems that require intricate trade analyses for optimum implementations. We focus on a subset of the trade process, using classical reliability theory and SEU data, to illustrate appropriate TMR scheme selection.
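    The core of any TMR implementation is the majority voter, which masks a single upset copy; a bitwise sketch:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority voter: each output bit is the majority of the three
    redundant copies, so a single-event upset in one copy is masked."""
    return (a & b) | (a & c) | (b & c)
```

    Reliability trade analyses of the kind the abstract describes then ask where in the FPGA design such voters pay for their area and routing overhead, given measured SEU rates.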

  5. Using Nonlinear Stochastic Evolutionary Game Strategy to Model an Evolutionary Biological Network of Organ Carcinogenesis Under a Natural Selection Scheme

    PubMed Central

    Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei

    2015-01-01

    Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., attractor phenotype) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton-Jacobi inequality-constrained optimization problem. 
The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, in reasonable agreement with statistics on the average age of lung cancer onset. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004

  6. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    NASA Astrophysics Data System (ADS)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, which is a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model, obtained by trial and error. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.

  7. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities

    PubMed Central

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-01-01

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones. PMID:27918424

  8. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities.

    PubMed

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-12-02

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones.

  9. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  10. Portable Nanoparticle-Based Sensors for Food Safety Assessment

    PubMed Central

    Bülbül, Gonca; Hayat, Akhtar; Andreescu, Silvana

    2015-01-01

    The use of nanotechnology-derived products in the development of sensors and analytical measurement methodologies has increased significantly over the past decade. Nano-based sensing approaches include the use of nanoparticles (NPs) and nanostructures to enhance sensitivity and selectivity, design new detection schemes, improve sample preparation and increase portability. This review summarizes recent advancements in the design and development of NP-based sensors for assessing food safety. The most common types of NPs used to fabricate sensors for detection of food contaminants are discussed. Selected examples of NP-based detection schemes with colorimetric and electrochemical detection are provided with focus on sensors for the detection of chemical and biological contaminants including pesticides, heavy metals, bacterial pathogens and natural toxins. Current trends in the development of low-cost portable NP-based technology for rapid assessment of food safety as well as challenges for practical implementation and future research directions are discussed. PMID:26690169

  11. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most applicable imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
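    The two fusion rules can be sketched generically on subband coefficient arrays. The blending weight and window size below are illustrative assumptions, and real curvelet coefficients would come from a curvelet toolbox rather than this toy.

```python
import numpy as np


def fuse_high(c1, c2, weight=0.5, fuzzy=True):
    """Fuse high-frequency subbands: a blend of weighted averaging and
    maximum-absolute selection (a simplified stand-in for the fuzzy rule)."""
    max_sel = np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    avg = weight * c1 + (1.0 - weight) * c2
    return 0.5 * max_sel + 0.5 * avg if fuzzy else max_sel


def local_energy(c, win=3):
    """Sum of squared coefficients over a win x win neighborhood."""
    pad = win // 2
    cp = np.pad(c ** 2, pad, mode="edge")
    e = np.zeros_like(c, dtype=float)
    for dy in range(win):
        for dx in range(win):
            e += cp[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return e


def fuse_low(c1, c2, win=3):
    """Fuse low-frequency subbands by local-energy maximum selection."""
    return np.where(local_energy(c1, win) >= local_energy(c2, win), c1, c2)
```

    Max-absolute selection keeps the strongest edges from either frame, while local-energy selection on the low-pass band keeps the smoother approximation from whichever frame is locally more informative.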

  12. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
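    The core of counter-based dynamic load balancing is simple: workers atomically increment a shared counter to claim the next contingency case, so faster workers naturally process more cases. A threaded sketch (illustrative, not the paper's HPC implementation):

```python
import threading


def run_contingencies(n_cases, n_workers, process):
    """Counter-based dynamic load balancing: each worker claims the next
    case index from a shared counter under a lock, so faster workers
    naturally take on more cases than slower ones."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # atomic "fetch and increment"
                i = counter["next"]
                if i >= n_cases:
                    return
                counter["next"] = i + 1
            results[i] = process(i)         # analyze contingency case i

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    Compared with a static split of cases across workers, the shared counter keeps all workers busy until the case pool is exhausted, which is what makes the scheme attractive when case runtimes vary.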

  13. Multispectral Image Enhancement Through Adaptive Wavelet Fusion

    DTIC Science & Technology

    2016-09-14

    This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can ... effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale ... details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at

  14. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT) to achieve good performance with a smaller amount of data. This work exploits the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme are proved in theory, and an adaptive threshold selection method is designed in which the parameter of the object signal selection condition is replaced by the normalized value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
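    For background, basic correlation ghost imaging (to which NGI and the proposed scheme add normalization and positive-negative correspondence) reconstructs the object by correlating random illumination patterns with a single-pixel bucket signal. A 1-D toy sketch, not the PCNGI-AT algorithm itself:

```python
import random


def ghost_image(obj, n_patterns=5000, seed=0):
    """Correlation ghost imaging: O(x) = <B*I(x)> - <B><I(x)>, where the
    I are random illumination patterns and B is the bucket (single-pixel)
    detector signal B = sum_x I(x) * obj(x)."""
    rnd = random.Random(seed)
    n = len(obj)
    sum_b = 0.0
    sum_i = [0.0] * n
    sum_bi = [0.0] * n
    for _ in range(n_patterns):
        pattern = [rnd.random() for _ in range(n)]
        bucket = sum(p * o for p, o in zip(pattern, obj))
        sum_b += bucket
        for x in range(n):
            sum_i[x] += pattern[x]
            sum_bi[x] += bucket * pattern[x]
    m = float(n_patterns)
    return [sum_bi[x] / m - (sum_b / m) * (sum_i[x] / m) for x in range(n)]
```

    The covariance estimate converges slowly with the number of patterns, which is exactly the SNR-versus-data trade-off that normalized and correspondence variants such as the proposed scheme aim to improve.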

  15. One-third selection scheme for addressing a ferroelectric matrix arrangement

    NASA Technical Reports Server (NTRS)

    Tannas, Jr., Lawrence E. (Inventor)

    1979-01-01

    An improved scheme for selectively addressing a matrix arrangement comprised of ferroelectrics having x and y orthogonally disposed intersecting lines. A one-third selection scheme is utilized that includes normalized selection signals having amplitudes: V.sub.x =0; V.sub.x =2/3; V.sub.y =1/3; and V.sub.y =1, which signals can be applied to the intersection of an x and y-line. The instant selection scheme minimizes both hysteresis creep and the cross-coupling voltage between x and y-lines to prevent undesirable hysteresis switching of the ferroelectric matrix arrangement.
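
    The voltage arithmetic behind the one-third selection scheme is easy to check numerically: with the normalized amplitudes given in the abstract, the addressed intersection sees the full switching voltage while every other cell sees only one third of it in magnitude, too little to switch the ferroelectric. A small sketch of that check (our own illustration, not the patent's circuitry):

```python
from fractions import Fraction

# Normalized one-third selection signals from the abstract:
# selected x-line: 0, non-selected x-lines: 2/3,
# selected y-line: 1, non-selected y-lines: 1/3.
V_X_SEL, V_X_NONSEL = Fraction(0), Fraction(2, 3)
V_Y_SEL, V_Y_NONSEL = Fraction(1), Fraction(1, 3)

def cell_voltage(x_selected, y_selected):
    """Voltage across the ferroelectric cell at an (x, y) intersection."""
    vx = V_X_SEL if x_selected else V_X_NONSEL
    vy = V_Y_SEL if y_selected else V_Y_NONSEL
    return vy - vx
```

    The fully selected cell sees 1, while half-selected and unselected cells all see magnitude 1/3, which is what limits hysteresis creep and cross-coupling between lines.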

  16. Application of Snyder-Dolan classification scheme to the selection of "orthogonal" columns for fast screening of illicit drugs and impurity profiling of pharmaceuticals--I. Isocratic elution.

    PubMed

    Fan, Wenzhe; Zhang, Yu; Carr, Peter W; Rutan, Sarah C; Dumarey, Melanie; Schellinger, Adam P; Pritts, Wayne

    2009-09-18

    Fourteen judiciously selected reversed-phase columns were tested with 18 cationic drug solutes under the isocratic elution conditions advised in the Snyder-Dolan (S-D) hydrophobic subtraction method of column classification. The standard errors (S.E.) of the least-squares regressions of log k' vs. log k'(REF) were obtained for a given column against a reference column and used to compare and classify columns based on their selectivity. The results are consistent with those obtained in a study of the 16 test solutes recommended by Snyder and Dolan. To the extent that these drugs are representative, these results show that the S-D classification scheme is also generally applicable to pharmaceuticals under isocratic conditions. That is, those columns judged to be similar based on the 16 S-D solutes were similar based on the 18 drugs; furthermore, those columns judged to have significantly different selectivities based on the 16 S-D probes appeared to be quite different for the drugs as well. Given that the S-D method has been used to classify more than 400 different types of reversed phases, the extension to cationic drugs is a significant finding.
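
    The comparison statistic used here, the standard error of the least-squares fit of log k' on a test column against log k' on a reference column, is straightforward to compute: a near-zero S.E. indicates essentially identical selectivity. A minimal sketch (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def selectivity_se(logk_col, logk_ref):
    """S.E. of the least-squares regression of log k' (test column)
    vs. log k' (reference column); small values mean similar selectivity."""
    logk_col = np.asarray(logk_col, dtype=float)
    logk_ref = np.asarray(logk_ref, dtype=float)
    slope, intercept = np.polyfit(logk_ref, logk_col, 1)
    resid = logk_col - (slope * logk_ref + intercept)
    dof = len(logk_col) - 2            # two fitted parameters
    return float(np.sqrt(np.sum(resid**2) / dof))
```

    With 18 drug solutes, each pairwise column comparison reduces to one such regression, and columns can then be grouped by their S.E. against a chosen reference.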

  17. Optimum Adaptive Modulation and Channel Coding Scheme for Frequency Domain Channel-Dependent Scheduling in OFDM Based Evolved UTRA Downlink

    NASA Astrophysics Data System (ADS)

    Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao

    In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user, assuming frequency- and time-domain channel-dependent scheduling in the downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information for both RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme over the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme is greater than that for the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferred when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.

  18. Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs

    NASA Astrophysics Data System (ADS)

    Zhu, Yanfeng; Niu, Zhisheng

    Much research has shown that a carefully designed auto rate medium access control can utilize the underlying physical multi-rate capability to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes, called FARM and FARM+, from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed from the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the distribution of SNR varies across stations. Extensive simulation results show that the proposed schemes outperform the existing throughput/time-share fair auto rate schemes in time-varying channel conditions.
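
    The receiver-side rate probing described above amounts to a threshold lookup: measure the SNR of the incoming RTS and report back, via the CTS, the highest rate whose SNR requirement is met. A minimal sketch; the threshold values are illustrative placeholders, not figures from the paper or the 802.11 standard:

```python
# Hypothetical per-rate SNR thresholds (dB) for an 802.11-style multi-rate PHY.
RATE_TABLE = [  # (min SNR in dB, rate in Mbit/s) -- illustrative values only
    (6.0, 6), (8.0, 9), (10.0, 12), (13.0, 18),
    (16.0, 24), (20.0, 36), (24.0, 48), (26.0, 54),
]

def select_rate(rts_snr_db):
    """Return the maximum feasible rate probed from the SNR of the received RTS,
    or None if even the lowest rate is infeasible (receiver defers)."""
    feasible = [rate for thresh, rate in RATE_TABLE if rts_snr_db >= thresh]
    return max(feasible) if feasible else None
```

    FARM/FARM+ layer their fairness logic on top of this probing step, deciding whether to return the CTS at all based on each station's throughput or time share.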

  19. Design of a Vehicle Based Intervention System to Prevent Ozone Loss

    NASA Technical Reports Server (NTRS)

    Cole, Robin; Fisher, Daniel; Meade, Matt; Neel, James; Olson, Kristin; Pittman, Andrew; Valdivia, Anne; Wibisono, Aria; Mason, William H.; Kirschbaum, Nathan

    1995-01-01

    This project was designed to be completed over a period of three years. Overall project goals were: (1) To understand the processes that contribute to stratospheric ozone loss; (2) To determine the best prevention scheme for loss; (3) To design a delivery vehicle to accomplish the prevention scheme. The 1994-1995 design objectives included: (1) To review the results of the 1993-1994 design team, including a reevaluation of the major assumptions and criteria selected to choose a vehicle; (2) To evaluate preliminary vehicle concepts and perform quantitative trade studies to select the optimal vehicle concept.

  20. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
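
    The ER algorithm at the heart of this method alternates between two constraints: impose the (estimated) Fourier transform magnitude in the frequency domain, then re-impose the known intensities in the spatial domain. A bare-bones sketch of that loop, in the Gerchberg-Saxton/Fienup style; the paper's magnitude-estimation step is omitted and a known target magnitude is assumed for illustration:

```python
import numpy as np

def error_reduction(magnitude, known_mask, known_values, n_iter=200, seed=0):
    """Alternate between the Fourier-magnitude constraint and the known-pixel
    (object-domain) constraint; the missing pixels converge toward values
    consistent with both. Returns the reconstructed patch."""
    rng = np.random.default_rng(seed)
    g = np.where(known_mask, known_values, rng.random(magnitude.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # keep the current phase, impose the target magnitude
        G = magnitude * np.exp(1j * np.angle(G))
        g = np.real(np.fft.ifft2(G))
        # re-impose the known intensities outside the missing area
        g = np.where(known_mask, known_values, g)
    return g
```

    In the paper, the `magnitude` input would itself be estimated from known patches with similar spectra; here it is supplied directly so the phase-retrieval loop can be seen in isolation.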

  1. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
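
    For tiny instances, the joint choice of a route per flow and a polling switch on that route can simply be enumerated, which makes the objective of the ILP concrete: pay a fixed activation cost for each distinct polled switch plus a per-flow report cost. This brute-force toy uses our own simplified cost model, not the paper's exact ILP, and is exponential, so it is for illustration only:

```python
from itertools import product

def min_polling_cost(flow_routes, switch_cost, report_cost=1):
    """flow_routes[i] lists candidate routes (tuples of switch ids) for flow i.
    Jointly choose a route per flow and a polling switch on that route,
    minimizing distinct-switch activation cost plus one report per flow."""
    best = float("inf")
    for routes in product(*flow_routes):          # one candidate route per flow
        for polled in product(*routes):           # one polling switch per flow
            cost = (sum(switch_cost[s] for s in set(polled))
                    + report_cost * len(routes))
            best = min(best, cost)
    return best
```

    Note how routing both flows through a shared switch and polling only it can beat polling each flow's individually cheapest switch: this coupling between routing and polling is exactly what the joint optimization exploits.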

  2. Characterization of palmprints by wavelet signatures via directional context modeling.

    PubMed

    Zhang, Lei; Zhang, David

    2004-06-01

    The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user friendly, more cost effective, and require fewer data signatures than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme exploits the features of principal lines and prominent wrinkles sufficiently and achieves satisfactory results. Compared with the line-segments-matching or interesting-points-matching based palmprint verification schemes, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.
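
    The statistical signatures named in this abstract (gravity center, density, spatial dispersivity, energy) can be illustrated on any wavelet subband once the predominant coefficients are chosen. The sketch below stands in for the paper's directional-context selection with a simple magnitude threshold; the function name and the selection rule are our assumptions, not the authors' method:

```python
import numpy as np

def subband_signatures(subband, keep_ratio=0.1):
    """Keep the strongest coefficients of a wavelet subband (a stand-in for
    directional-context selection) and summarize them by gravity center,
    density, spatial dispersivity and energy."""
    a = np.abs(subband)
    thresh = np.quantile(a, 1.0 - keep_ratio)
    ys, xs = np.nonzero(a >= thresh)
    vals = a[ys, xs]
    cy, cx = ys.mean(), xs.mean()                     # gravity center
    density = len(vals) / a.size                      # fraction of kept coefficients
    dispersivity = np.hypot(ys - cy, xs - cx).mean()  # mean distance to center
    energy = float(np.sum(vals**2))
    return {"center": (cy, cx), "density": density,
            "dispersivity": dispersivity, "energy": energy}
```

    A palmprint's principal lines concentrate large coefficients along a few directions, so these four numbers per subband form a compact feature vector compared with matching raw line segments or interest points.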

  3. Meeting stroke survivors' perceived needs: a qualitative study of a community-based exercise and education scheme.

    PubMed

    Reed, Mary; Harrington, Rachel; Duggan, Aine; Wood, Victorine A

    2010-01-01

    A qualitative study using a phenomenological approach, to explore stroke survivors' needs and their perceptions of whether a community stroke scheme met these needs. Semi-structured in-depth interviews of 12 stroke survivors, purposively selected from participants attending a new community stroke scheme. Interpretative phenomenological analysis of interviews by two researchers independently. Participants attending the community stroke scheme sought to reconstruct their lives in the aftermath of their stroke. To enable this they needed internal resources of confidence and sense of purpose to 'create their social self', and external resources of 'responsive services' and an 'informal support network', to provide direction and encouragement. Participants felt the community stroke scheme met some of these needs through exercise, goal setting and peer group interaction, which included social support and knowledge acquisition. Stroke survivors need a variety of internal and external resources so that they can rebuild their lives positively post stroke. A stroke-specific community scheme, based on exercise, life-centred goal setting, peer support and knowledge acquisition, is an external resource that can help with meeting some of the stroke survivor's needs.

  4. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between a low-frequency image and the defocused image. The NSCT algorithm decomposes the detail image information residing at different scales and in different directions into the bandpass subband coefficients. In order to correctly pick out the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels from the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  5. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels †

    PubMed Central

    Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku

    2016-01-01

    In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source’s radio-frequency radiation and then use that energy to forward the source information. Considering power splitting receiver architecture used at relays to harvest energy, we are concerned with the performance of two popular relay selection schemes, namely, partial relay selection (PRS) scheme and optimal relay selection (ORS) scheme. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive the closed-form approximations for the system outage probabilities of both schemes and validate the analysis by the Monte-Carlo simulation. The numerical results provide comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performances of both schemes. Additionally, we also show the advantages and drawbacks of the wireless energy harvesting cooperative networks and compare to the conventional cooperative networks. PMID:26927119

  6. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels.

    PubMed

    Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku

    2016-02-26

    In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering power splitting receiver architecture used at relays to harvest energy, we are concerned with the performance of two popular relay selection schemes, namely, partial relay selection (PRS) scheme and optimal relay selection (ORS) scheme. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive the closed-form approximations for the system outage probabilities of both schemes and validate the analysis by the Monte-Carlo simulation. The numerical results provide comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performances of both schemes. Additionally, we also show the advantages and drawbacks of the wireless energy harvesting cooperative networks and compare to the conventional cooperative networks.
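
    The PRS/ORS comparison in these two records lends itself to a compact Monte-Carlo check. The sketch below deliberately strips out the energy-harvesting model and simulates only the two selection rules over i.n.i.d. Rayleigh fading (exponentially distributed channel gains with distinct per-relay means): PRS selects on the source-relay hop alone, ORS on the end-to-end minimum of the two hops, so ORS can never have a higher outage probability. The mean-SNR profile is an arbitrary illustrative choice:

```python
import numpy as np

def outage_probabilities(n_relays=4, n_trials=200_000, gamma_th=1.0, seed=1):
    """Monte-Carlo outage probabilities for partial (PRS) vs. optimal (ORS)
    relay selection in a decode-and-forward network, i.n.i.d. Rayleigh fading."""
    rng = np.random.default_rng(seed)
    mean_sr = np.linspace(2.0, 5.0, n_relays)      # mean SNR, source -> relay
    mean_rd = np.linspace(5.0, 2.0, n_relays)      # mean SNR, relay -> destination
    g_sr = rng.exponential(mean_sr, size=(n_trials, n_relays))
    g_rd = rng.exponential(mean_rd, size=(n_trials, n_relays))
    e2e = np.minimum(g_sr, g_rd)                   # DF end-to-end SNR per relay
    prs = e2e[np.arange(n_trials), np.argmax(g_sr, axis=1)]  # best first hop only
    ors = e2e.max(axis=1)                          # best end-to-end
    return (prs < gamma_th).mean(), (ors < gamma_th).mean()
```

    The papers' closed-form approximations additionally fold in the power-splitting energy-harvesting model at the relays; a simulation like this is what they validate those expressions against.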

  7. Multilevel Green's function interpolation method for scattering from composite metallic and dielectric objects.

    PubMed

    Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou

    2008-10-01

    A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).

  8. A Simple Algebraic Grid Adaptation Scheme with Applications to Two- and Three-dimensional Flow Problems

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.; Lytle, John K.

    1989-01-01

    An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
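
    In one dimension, arc-length equidistribution reduces to two steps: accumulate the arc length of the current solution curve, then invert that cumulative function at equally spaced stations. A minimal numpy sketch of the idea (our 1-D illustration, without the paper's stretching control, smoothing, or multi-dimensional treatment):

```python
import numpy as np

def equidistribute(x, f, n=None):
    """Redistribute 1-D grid points so that each interval carries an equal
    share of the arc length of the solution curve (x, f(x))."""
    x, f = np.asarray(x, float), np.asarray(f, float)
    n = len(x) if n is None else n
    dx, df = np.diff(x), np.diff(f)
    w = np.sqrt(dx**2 + df**2)                 # arc length of each interval
    s = np.concatenate([[0.0], np.cumsum(w)])  # cumulative arc length s(x)
    targets = np.linspace(0.0, s[-1], n)       # equal arc-length stations
    return np.interp(targets, s, x)            # invert s(x) by interpolation
```

    Because this is purely algebraic (no differential equations are solved), the cost of each adaptation pass is negligible, which is the property the abstract highlights for three-dimensional problems.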

  9. Effects of information, education, and communication campaign on a community-based health insurance scheme in Burkina Faso

    PubMed Central

    Cofie, Patience; De Allegri, Manuela; Kouyaté, Bocar; Sauerborn, Rainer

    2013-01-01

    Objective The study analysed the effect of Information, Education, and Communication (IEC) campaign activities on the adoption of a community-based health insurance (CHI) scheme in Nouna, Burkina Faso. It also identified the factors that enhanced or limited the campaign's effectiveness. Design Complementary data collection approaches were used. A survey was conducted with 250 randomly selected household heads, followed by in-depth interviews with 22 purposively selected community leaders, group discussions with the project management team, and field observations. Bivariate analysis and multivariate logistic regression models were used to assess the association between household exposure to campaign and acquisition of knowledge as well as household exposure to campaign and enrolment. Results The IEC campaign had a positive effect on households’ knowledge about the CHI and to a lesser extent on household enrolment in the scheme. The effectiveness of the IEC strategy was mainly influenced by: (1) frequent and consistent IEC messages from multiple media channels (mass and interpersonal channels), including the radio, a mobile information van, and CHI team, and (2) community heads’ participation in the CHI scheme promotion. Education was the only significantly influential socio-demographic determinant of knowledge and enrolment among household heads. The relatively low effects of the IEC campaign on CHI enrolment are indicative of other important IEC mediating factors, which should be taken into account in future CHI campaign evaluation. Conclusion The study concludes that an IEC campaign is crucial to improving the understanding of the CHI scheme concept, which is an enabler to enrolment, and should be integrated into scheme designs and evaluations. PMID:24314344

  10. Effects of information, education, and communication campaign on a community-based health insurance scheme in Burkina Faso.

    PubMed

    Cofie, Patience; De Allegri, Manuela; Kouyaté, Bocar; Sauerborn, Rainer

    2013-12-06

    The study analysed the effect of Information, Education, and Communication (IEC) campaign activities on the adoption of a community-based health insurance (CHI) scheme in Nouna, Burkina Faso. It also identified the factors that enhanced or limited the campaign's effectiveness. Complementary data collection approaches were used. A survey was conducted with 250 randomly selected household heads, followed by in-depth interviews with 22 purposively selected community leaders, group discussions with the project management team, and field observations. Bivariate analysis and multivariate logistic regression models were used to assess the association between household exposure to campaign and acquisition of knowledge as well as household exposure to campaign and enrolment. The IEC campaign had a positive effect on households' knowledge about the CHI and to a lesser extent on household enrolment in the scheme. The effectiveness of the IEC strategy was mainly influenced by: (1) frequent and consistent IEC messages from multiple media channels (mass and interpersonal channels), including the radio, a mobile information van, and CHI team, and (2) community heads' participation in the CHI scheme promotion. Education was the only significantly influential socio-demographic determinant of knowledge and enrolment among household heads. The relatively low effects of the IEC campaign on CHI enrolment are indicative of other important IEC mediating factors, which should be taken into account in future CHI campaign evaluation. The study concludes that an IEC campaign is crucial to improving the understanding of the CHI scheme concept, which is an enabler to enrolment, and should be integrated into scheme designs and evaluations.

  11. An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows

    NASA Astrophysics Data System (ADS)

    Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard

    2018-06-01

    In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.

  12. A novel all-optical label processing based on multiple optical orthogonal codes sequences for optical packet switching networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun

    2008-05-01

    This paper proposes an all-optical label processing scheme that uses multiple optical orthogonal codes sequences (MOOCS)-based optical labels for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of multiple optical orthogonal codes (MOOC) selected from multiple-group optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of the MOOCS-OPS network are given and analyzed. Theoretical analyses first prove that MOOCS greatly enlarges the number of available optical labels compared with the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction and optical label rewriting, are given and studied. These results verify that the proposed MOOCS-OPS scheme is feasible.
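
    The label-space enlargement claimed for MOOCS over a single OOC is plain combinatorics: with n available codes, single-code labels give n possibilities, while sequences of k distinct codes give n!/(n-k)! ordered labels (or C(n,k) unordered ones). A quick illustration; the numbers are examples, not the paper's actual code-set parameters:

```python
from math import comb, perm

def sooc_labels(n_codes):
    """Single-OOC labelling: one code per label."""
    return n_codes

def moocs_labels(n_codes, seq_len, ordered=True):
    """MOOCS labelling: a label is a sequence (permutation) or a set
    (combination) of seq_len distinct codes."""
    return perm(n_codes, seq_len) if ordered else comb(n_codes, seq_len)
```

    With only 16 codes and sequences of length 3, the ordered label space already grows from 16 to 3360, which is the scaling argument behind the scheme.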

  13. Paternity testing a non-linkage based marker assisted selection scheme for outbred forage species

    USDA-ARS?s Scientific Manuscript database

    In many major perennial forage species, genomic tools and infrastructure development has advanced enough that their utilization in marker assisted selection (MAS) can be cheaply explored. This paper presents a paternity testing MAS in diploid red clover (Trifolium pratense L.). Utilizing individual ...

  14. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).

    PubMed

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-05-12

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from the forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme selects a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with a surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme achieves a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime.

  15. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs)

    PubMed Central

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-01-01

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from the forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme selects a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with a surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme achieves a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime. PMID:29757208

  16. Dynamic Supersonic Base Store Ejection Simulation Using Beggar

    DTIC Science & Technology

    2008-12-01

    …selected convergence tolerance. Beggar accomplishes this by using the symmetric Gauss-Seidel relaxation scheme [26] (Section 2.3.3). To compute a time-accurate solution to an unsteady flow problem, Beggar applies Newton's method to Eq. 2.15…

  17. A new scheme for velocity analysis and imaging of diffractions

    NASA Astrophysics Data System (ADS)

    Lin, Peng; Peng, Suping; Zhao, Jingtao; Cui, Xiaoqin; Du, Wenfeng

    2018-06-01

    Seismic diffractions are the responses of small-scale inhomogeneities or discontinuous geological features, which play a vital role in the exploitation and development of oil and gas reservoirs. However, diffractions are generally ignored and treated as interference noise in conventional data processing. In this paper, a new scheme for velocity analysis and imaging of seismic diffractions is proposed. The scheme comprises two steps in our application. First, the plane-wave destruction method is used to separate diffractions from specular reflections in the prestack domain. Second, in order to accurately estimate the migration velocity of the diffractions, time-domain dip-angle gathers are derived from a Kirchhoff-based angle prestack time migration using the separated diffractions. Diffraction events appear flat in the dip-angle gathers when imaged above the diffraction point with an accurately selected migration velocity. The selected migration velocity helps to produce the desired prestack imaging of diffractions. Synthetic and field examples are used to test the validity of the new scheme. The diffraction imaging results indicate that the proposed scheme can provide more detailed information about small-scale geologic features for seismic interpretation.

  18. Predominant-period site classification for response spectra prediction equations in Italy

    USGS Publications Warehouse

    Di Alessandro, Carola; Bonilla, Luis Fabian; Boore, David M.; Rovelli, Antonio; Scotti, Oona

    2012-01-01

    We propose a site‐classification scheme based on the predominant period of the site, as determined from the average horizontal‐to‐vertical (H/V) spectral ratios of ground motion. Our scheme extends Zhao et al. (2006) classifications by adding two classes, the most important of which is defined by flat H/V ratios with amplitudes less than 2. The proposed classification is investigated by using 5%‐damped response spectra from Italian earthquake records. We select a dataset of 602 three‐component analog and digital recordings from 120 earthquakes recorded at 214 seismic stations within a hypocentral distance of 200 km. Selected events are in the moment‐magnitude range 4.0≤Mw≤6.8 and have focal depths from a few kilometers to 46 km. We compute H/V ratios for these data and use them to classify each site into one of six classes. We then investigate the impact of this classification scheme on empirical ground‐motion prediction equations (GMPEs) by comparing its performance with that of the conventional rock/soil classification. Although the adopted approach results in only a small reduction of the overall standard deviation, the use of H/V spectral ratios in site classification does capture the signature of sites with flat frequency‐response, as well as deep and shallow‐soil profiles, characterized by long‐ and short‐period resonance, respectively; in addition, the classification scheme is relatively quick and inexpensive, which is an advantage over schemes based on measurements of shear‐wave velocity.
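
    As a rough illustration of the classification logic described above, the Python sketch below assigns a site class from a mean H/V curve: a site whose H/V ratio never reaches 2 falls into the added flat class, and the rest are binned by predominant (peak) period. The function name and the period-band edges are hypothetical illustrations, not the paper's exact values.

```python
def classify_site(periods, hv_ratio, flat_threshold=2.0):
    """Toy site classifier in the spirit of the scheme above.

    periods  : list of periods (s) at which the mean H/V ratio is given
    hv_ratio : mean horizontal-to-vertical spectral ratio at each period

    Returns 'flat' when no peak exceeds flat_threshold (the paper's added
    flat-response class); otherwise a period-band class keyed to the
    predominant (peak) period. Band edges here are illustrative only.
    """
    peak_amp = max(hv_ratio)
    if peak_amp < flat_threshold:
        return "flat"
    peak_period = periods[hv_ratio.index(peak_amp)]
    if peak_period < 0.2:
        return "short-period (shallow soil)"
    elif peak_period < 0.6:
        return "intermediate-period"
    else:
        return "long-period (deep soil)"

# Example: a site with a clear resonance peak at 0.8 s
T = [0.1, 0.2, 0.4, 0.8, 1.6]
hv = [1.1, 1.3, 1.8, 4.2, 2.0]
print(classify_site(T, hv))   # -> long-period (deep soil)
```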

  19. A combined approach of self-referencing and Principle Component Thermography for transient, steady, and selective heating scenarios

    NASA Astrophysics Data System (ADS)

    Omar, M. A.; Parvataneni, R.; Zhou, Y.

    2010-09-01

    The proposed manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principle Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. First, the research discusses the three basic processing schemes typically applied in thermography: mathematical transformation based processing, curve-fitting processing, and direct contrast based calculations. The proposed algorithm uses the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and to compute the anomalies' depth values. The Principle Component Thermography then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, thus enabling retrieval of their shape and size. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for an a priori defect-free area.

  20. Optimizing phonon space in the phonon-coupling model

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2017-08-01

    We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use the TBA here in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first carefully explore the cutoff dependence under the new criterion and work out a natural (optimal) cutoff parameter. We then use the newly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, also examining how the results depend on the Skyrme parametrization.

  1. Application of Snyder-Dolan Classification Scheme to the Selection of “Orthogonal” Columns for Fast Screening for Illicit Drugs and Impurity Profiling of Pharmaceuticals - I. Isocratic Elution

    PubMed Central

    Fan, Wenzhe; Zhang, Yu; Carr, Peter W.; Rutan, Sarah C.; Dumarey, Melanie; Schellinger, Adam P.; Pritts, Wayne

    2011-01-01

    Fourteen judiciously selected reversed-phase columns were tested with 18 cationic drug solutes under the isocratic elution conditions advised in the Snyder-Dolan (S-D) hydrophobic subtraction method of column classification. The standard errors (S.E.) of the least squares regressions of log k′ vs. log k′REF were obtained for a given column against a reference column and used to compare and classify columns based on their selectivity. The results are consistent with those obtained in a study of the 16 test solutes recommended by Snyder and Dolan. To the extent that these drugs are representative, these results show that the S-D classification scheme is also generally applicable to pharmaceuticals under isocratic conditions. That is, those columns judged to be similar based on the 16 S-D solutes were similar based on the 18 drugs; furthermore, those columns judged to have significantly different selectivities based on the 16 S-D probes appeared to be quite different for the drugs as well. Given that the S-D method has been used to classify more than 400 different types of reversed phases, the extension to cationic drugs is a significant finding. PMID:19698948
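
    The comparison metric used above, the standard error of a least-squares fit of log k′ against log k′REF, can be sketched in a few lines of Python. The retention data below are hypothetical, and the implementation is an illustration of the metric rather than the authors' exact procedure.

```python
import math

def regression_se(log_k_ref, log_k_col):
    """Residual standard error of the least-squares fit of log k' (test
    column) vs. log k' (reference column); a small S.E. indicates similar
    selectivity. Illustrative implementation only."""
    n = len(log_k_ref)
    mx = sum(log_k_ref) / n
    my = sum(log_k_col) / n
    sxx = sum((x - mx) ** 2 for x in log_k_ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_k_ref, log_k_col))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(log_k_ref, log_k_col))
    return math.sqrt(ss_res / (n - 2))   # n - 2 degrees of freedom

# Hypothetical retention data for 5 solutes on two similar columns
ref = [0.10, 0.35, 0.60, 0.85, 1.10]
col = [0.12, 0.36, 0.63, 0.84, 1.13]
print(round(regression_se(ref, col), 4))
```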

  2. Selection with inbreeding control in simulated young bull schemes for local dairy cattle breeds.

    PubMed

    Gandini, G; Stella, A; Del Corvo, M; Jansen, G B

    2014-03-01

    Local breeds are rarely subject to modern selection techniques; however, selection programs will be required if local breeds are to remain a viable livelihood option for farmers. Selection in small populations needs to take into account accurate inbreeding control. Optimum contribution selection (OCS) is efficient in controlling inbreeding and maximizes genetic gain. The current paper investigates genetic progress in simulated dairy cattle populations from 500 to 6,000 cows undergoing young bull selection schemes with OCS compared with truncation selection (TS) at an annual inbreeding rate of 0.003. Selection is carried out for a dairy trait with a base heritability of 0.3. A young bull selection scheme was used because of its simplicity of implementation. With TS, annual genetic gain increases rapidly from 0.111 standard deviation units with 500 cows to 0.145 standard deviation units with 4,000 cows; genetic gain then increases more slowly up to 6,000 cows. At the same inbreeding rate, OCS produces higher genetic progress than TS. Differences in genetic gain between OCS and TS vary from 2 to 6.3%. Genetic gain is also improved by increasing the number of years that males can be used as sires of sires. When comparing OCS versus TS at different heritabilities, we observe an advantage of OCS only at high heritability, up to 8% with a heritability of 0.9. By increasing the constraint on inbreeding, the difference in genetic gain between the 2 selection methods increases in favor of OCS; the advantage at an inbreeding rate of 0.001 per generation is 6 times that at the inbreeding rate of 0.003. Opportunities exist for selection even in dairy cattle populations of a few hundred females. In any case, selection in local breeds will most often require specific investments in infrastructure and manpower, including systems for accurate data recording, selection skills, and the presence of artificial insemination and breeders' organizations. 
A cost-benefit analysis is therefore advisable before considering the implementation of selection schemes in local dairy cattle breeds. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
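
    For readers unfamiliar with how truncation selection turns a selected proportion into genetic gain, the sketch below computes the standardized selection intensity and a one-round gain in the standard breeder's-equation form ΔG = i·r·σ. The numeric inputs are hypothetical and unrelated to the paper's simulations.

```python
from statistics import NormalDist

def selection_intensity(p):
    """Standardized selection intensity i = phi(z)/p for truncation
    selection keeping the top proportion p of a normal distribution,
    where z is the truncation point."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - p)
    return nd.pdf(z) / p

# Toy illustration: expected one-round gain dG = i * accuracy * sigma,
# expressed in additive-genetic standard deviation units
p, accuracy, sigma = 0.05, 0.7, 1.0   # hypothetical values
i = selection_intensity(p)
print(round(i, 3), round(i * accuracy * sigma, 3))
```

    With p = 0.05 the intensity comes out near the textbook value of about 2.06, so a selection accuracy of 0.7 gives roughly 1.44 SD units of expected gain per round.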

  3. Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.

    PubMed

    Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M

    2016-06-01

    Recent genomic evaluation studies using real data and predicting genetic gain by modeling breeding programs have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices from a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of the annual genetic gains of meat and maternal traits. For GS, a reference population of 2000 individuals was assumed and genomic information was available for the evaluation of male candidates only. In the classic selection scheme, males' breeding values were estimated from own and offspring phenotypes. In GS, different scenarios were considered, differing in the information used to select males (genomic only, genomic + own performance, genomic + offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (with a genotyping cost of 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were found to be superior to classic selection only if genomic information was combined with males' own meat phenotypes (GS-Pheno) or with their progeny test information. The predicted economic efficiency, defined as returns (proportional to the number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. 
As a conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e. the initial reference population is available). Optimizing decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.

  4. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
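
    The entropy-driven chunk elimination at the heart of E-RFE can be caricatured as follows: when the distribution of SVM weight magnitudes is far from uniform (low entropy), many features carry little weight and a large chunk can be dropped at once; near-uniform weights fall back to one-at-a-time RFE. The weight vector, chunk-sizing rule, and function names are illustrative stand-ins for the published procedure.

```python
import math

def weight_entropy(weights):
    """Normalized Shannon entropy of the |w| distribution (1 = uniform)."""
    mags = [abs(w) for w in weights]
    total = sum(mags)
    probs = [m / total for m in mags if m > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(weights))

def chunk_to_eliminate(weights, min_chunk=1):
    """E-RFE-style step (illustrative): size the elimination chunk by how
    skewed the weight distribution is, then return the indices of the
    smallest-|w| features to drop."""
    h = weight_entropy(weights)
    n = len(weights)
    chunk = max(min_chunk, int((1.0 - h) * n / 2))
    order = sorted(range(n), key=lambda i: abs(weights[i]))
    return order[:chunk]

# Skewed weights: two dominant features, many negligible ones
w = [5.0, 4.0, 0.01, 0.02, 0.01, 0.03, 0.02, 0.01]
print(chunk_to_eliminate(w))   # drops a chunk of near-zero-weight features
```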

  5. Evaluation of new collision-pair selection models in DSMC

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Hassan; Roohi, Ehsan

    2017-10-01

    The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
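
    To make the deterministic nearest-neighbor variant discussed above concrete, here is a minimal sketch of in-cell collision-partner selection: the partner of a given particle is simply the particle closest to it in physical space. The positions and helper name are hypothetical.

```python
import math

def nearest_neighbor_partner(positions, i):
    """Deterministic nearest-neighbor collision-partner selection within a
    DSMC cell (illustrative): return the index of the particle closest to
    particle i."""
    best_j, best_d = None, float("inf")
    for j, p in enumerate(positions):
        if j == i:
            continue
        d = math.dist(positions[i], p)
        if d < best_d:
            best_j, best_d = j, d
    return best_j

# Four particles in a 2D cell; the partner of particle 0 is particle 2
cell = [(0.1, 0.1), (0.9, 0.9), (0.15, 0.12), (0.5, 0.8)]
print(nearest_neighbor_partner(cell, 0))   # -> 2
```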

  6. Laser cooling of molecules by zero-velocity selection and single spontaneous emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ooi, C. H. Raymond

    2010-11-15

    A laser-cooling scheme for molecules is presented based on repeated cycles of zero-velocity selection, deceleration, and irreversible accumulation. Although this scheme also employs a single spontaneous emission as in [Raymond Ooi, Marzlin, and Audretsch, Eur. Phys. J. D 22, 259 (2003)], in order to circumvent the difficulty of maintaining closed pumping cycles in molecules, there are two distinct features which make the cooling process of this scheme faster and more practical. First, the zero-velocity selection creates a narrow velocity-width population with zero mean velocity, such that no further deceleration (with many stimulated Raman adiabatic passage (STIRAP) pulses) is required. Second, only two STIRAP processes are required to decelerate the remaining hot molecular ensemble to create a finite population around zero velocity for the next cycle. We present a setup to realize the cooling process in one dimension with trapping in the other two dimensions using a Stark barrel. Numerical estimates of the cooling parameters and simulations with density matrix equations using OH molecules show the applicability of the cooling scheme. For a gas at temperature T=1 K, the estimated cooling time is only 2 ms, with phase-space density increased by about 30 times. The possibility of extension to three-dimensional cooling via thermalization is also discussed.

  7. Modern radiosurgical and endovascular classification schemes for brain arteriovenous malformations.

    PubMed

    Tayebi Meybodi, Ali; Lawton, Michael T

    2018-05-04

    Stereotactic radiosurgery (SRS) and endovascular techniques are commonly used for treating brain arteriovenous malformations (bAVMs). They are usually used as ancillary techniques to microsurgery but may also be used as solitary treatment options. Careful patient selection requires a clear estimate of the treatment efficacy and complication rates for the individual patient. As such, classification schemes are an essential part of patient selection paradigm for each treatment modality. While the Spetzler-Martin grading system and its subsequent modifications are commonly used for microsurgical outcome prediction for bAVMs, the same system(s) may not be easily applicable to SRS and endovascular therapy. Several radiosurgical- and endovascular-based grading scales have been proposed for bAVMs. However, a comprehensive review of these systems including a discussion on their relative advantages and disadvantages is missing. This paper is dedicated to modern classification schemes designed for SRS and endovascular techniques.

  8. The Semantic Management of Environmental Resources within the Interoperable Context of the EuroGEOSS: Alignment of GEMET and the GEOSS SBAs

    NASA Astrophysics Data System (ADS)

    Cialone, Claudia; Stock, Kristin

    2010-05-01

    EuroGEOSS is a European Commission funded project. It aims at improving scientific understanding of the complex mechanisms which drive changes affecting our planet, and at identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of already existing frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels between these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area. 
    Secondly, EuroGEOSS is intended to accommodate a number of existing environmental projects (for example, GEOSS and INSPIRE). This requirement imposes constraints on the selection. Thirdly, the selected classification scheme or group of schemes (if more than one) must be capable of alignment (establishing different kinds of mappings between concepts, hence preserving intact the original knowledge schemes) or merging (the creation of another unique ontology from the original ontological sources) (Pérez-Gómez et al., 2004). Last but not least, there is the issue of including multi-lingual schemes that are based on free, open standards (non-proprietary). Using these selection criteria, we aim to support open and convenient data discovery and exchange for users who speak different languages (particularly the European ones for the broad scopes of EuroGEOSS). In order to support the project, we have developed a solution that employs two classification schemes: the Societal Benefit Areas (SBAs): the upper-level environmental categorization developed for the GEOSS project, and the GEneral Multilingual Environmental Thesaurus (GEMET): a general environmental thesaurus whose conceptual structure has already been integrated with the spatial data themes proposed by the INSPIRE project. The latter seems to provide the spatial data keywords relevant to the INSPIRE Directive (JRC, 2008). In this way, we provide users with a basic set of concepts to support resource description and discovery in the thematic areas while supporting the requirements of INSPIRE and GEOSS. Furthermore, the use of only two classification schemes, together with the fact that the SBAs are very general categories while GEMET includes much more detailed, yet still top-level, concepts, makes alignment an achievable task. Alignment was selected over merging because it leaves the existing classification schemes intact and requires only a simple activity of defining mappings from GEMET to the SBAs. 
    In order to accomplish this task we are developing a simple, automated, open-source application to assist thematic experts in defining the mappings between concepts in the two classification schemes. The application will then generate SKOS mappings (exactMatch, closeMatch, broadMatch, narrowMatch, relatedMatch) based on thematic expert selections between the concepts in GEMET and the SBAs (including both the general Societal Benefit Areas and their subcategories). Once these mappings are defined and the SKOS files generated, resource providers will be able to select concepts from either GEMET or the SBAs (or a mixture) to describe their resources, and discovery approaches will support selection of concepts from either classification scheme, also returning results classified using the other scheme. While the focus of our work has been on the SBAs and GEMET, we also plan to provide a method for resource providers to further extend the semantic infrastructure by defining alignments to new classification schemes if these are required to support particular specialized thematic areas that are not covered by GEMET. In this way, the approach is flexible and suited to the general scope of EuroGEOSS, allowing specialists to increase at will the level of semantic quality and specificity of data to the initial infrastructural skeleton of the project. References: Joint Research Centre (JRC), 2008. INSPIRE Metadata Editor User Guide. Pérez-Gómez A., Fernandez-Lopez M., Corcho O., 2004. Ontological Engineering: With Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Springer: London.

  9. Modeling and Analysis of Energy Conservation Scheme Based on Duty Cycling in Wireless Ad Hoc Sensor Network

    PubMed Central

    Chung, Yun Won; Hwang, Ho Young

    2010-01-01

    In sensor networks, energy conservation is one of the most critical issues, since sensor nodes should perform a sensing task for a long time (e.g., lasting a few years) but their batteries cannot be replaced in most practical situations. For this purpose, numerous energy conservation schemes have been proposed, and duty cycling is considered the most suitable power conservation technique, in which sensor nodes alternate between states having different levels of power consumption. In order to analyze the energy consumption of an energy conservation scheme based on duty cycling, it is essential to obtain the probability of each state. In this paper, we analytically derive the steady state probability of sensor node states, i.e., sleep, listen, and active states, based on traffic characteristics and timer values, i.e., sleep timer, listen timer, and active timer. The effect of traffic characteristics and timer values on the steady state probability and energy consumption is analyzed in detail. Our work can provide sensor network operators a guideline for selecting appropriate timer values for efficient energy conservation. The analytical methodology developed in this paper can be extended to other energy conservation schemes based on duty cycling with different sensor node states, without much difficulty. PMID:22219676
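
    A toy version of the steady-state computation described above: model a sleep → listen → (active or sleep) cycle as an embedded Markov chain and weight its stationary visit rates by the mean state durations (the timer values). The cycle structure, parameter names, and numbers are illustrative assumptions, not the paper's exact model.

```python
def duty_cycle_probs(tau_sleep, tau_listen, tau_active, q):
    """Steady-state fraction of time in sleep/listen/active for a toy
    duty-cycling node: sleep -> listen; listen -> active with probability
    q (traffic sensed), otherwise back to sleep; active -> sleep.
    tau_* are mean state durations (timer values)."""
    # Per cycle the embedded chain visits sleep once, listen once,
    # and active with probability q (unnormalized visit rates).
    pi = {"sleep": 1.0, "listen": 1.0, "active": q}
    tau = {"sleep": tau_sleep, "listen": tau_listen, "active": tau_active}
    weight = {s: pi[s] * tau[s] for s in pi}   # time-weighted visit rates
    z = sum(weight.values())
    return {s: w / z for s, w in weight.items()}

# Hypothetical timers (ms) and a 20% chance of traffic during listen
p = duty_cycle_probs(tau_sleep=900, tau_listen=100, tau_active=500, q=0.2)
print({s: round(v, 3) for s, v in p.items()})
```

    Lengthening the sleep timer or lowering the traffic probability q pushes the time share toward the low-power sleep state, which is the trade-off the analysis quantifies.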

  10. Scheme for the selection of measurement uncertainty models in blood establishments' screening immunoassays.

    PubMed

    Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard; de Sousa, Gracinda

    2015-02-01

    Blood establishments routinely perform screening immunoassays to assess safety of the blood components. As with any other screening test, results have an inherent uncertainty. In blood establishments the major concern is the chance of false negatives, due to its possible impact on patients' health. This article briefly reviews GUM and diagnostic accuracy models for screening immunoassays, recommending a scheme to support the screening laboratories' staffs on the selection of a model considering the intended use of the screening results (i.e., post-transfusion safety). The discussion is grounded on a "risk-based thinking", risk being considered from the blood donor selection to the screening immunoassays. A combination of GUM and diagnostic accuracy models to evaluate measurement uncertainty in blood establishments is recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Energy efficient strategy for throughput improvement in wireless sensor networks.

    PubMed

    Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif

    2015-01-23

    Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most existing schemes are geared either towards prolonging network lifetime or towards improving throughput. This paper presents an energy efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy efficient forwarding node selection, cluster head rotation, and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature.
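
    The two-threshold rotation idea can be sketched as below: a soft energy threshold schedules rotation at the next round boundary, while a hard threshold forces an immediate handover. The threshold values, tie-breaking rule, and function name are hypothetical, intended only to show how two levels trigger rotation at different urgencies.

```python
def next_cluster_head(energies, current, soft=0.5, hard=0.2):
    """Two-threshold cluster-head rotation (illustrative sketch).

    energies : residual energy of each node as a fraction of initial
    current  : index of the current cluster head

    Keep the head while its energy stays above `soft`; below `soft`,
    rotate at the next round boundary; below `hard`, hand over now.
    The replacement is the node with the most residual energy.
    """
    e = energies[current]
    if e > soft:
        return current, "keep"
    candidate = max(range(len(energies)), key=lambda i: energies[i])
    action = "handover-now" if e <= hard else "rotate-next-round"
    return candidate, action

energies = [0.15, 0.9, 0.6, 0.8]   # node 0 is the depleted current head
print(next_cluster_head(energies, current=0))   # -> (1, 'handover-now')
```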

  13. Collision Resolution Scheme with Offset for Improved Performance of Heterogeneous WLAN

    NASA Astrophysics Data System (ADS)

    Upadhyay, Raksha; Vyavahare, Prakash D.; Tokekar, Sanjiv

    2016-03-01

    The CSMA/CA-based DCF of the 802.11 MAC layer employs a best-effort delivery model, in which all stations compete for channel access with the same priority. Heterogeneous conditions result in unfairness among stations and degradation in throughput; therefore, providing different priorities to different applications for the required quality of service in heterogeneous networks is a challenging task. This paper proposes a collision resolution scheme with the novel concept of introducing an offset, which is suitable for heterogeneous networks. Selection of a random contention value by a station with an offset results in a reduced probability of collision. An expression for the optimum value of the offset is also derived. Results show that the proposed scheme, when applied to heterogeneous networks, achieves higher throughput and fairness than the conventional scheme, and that in homogeneous networks it also exhibits higher throughput and fairness with reduced delay.
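
    The intuition that an offset shrinks the collision probability can be checked by direct enumeration: if one station draws its backoff slot uniformly from [0, W) and the other from [offset, offset + W), they can only collide on the slots the two ranges share. The window size and offset below are hypothetical, and the two-station model is a simplification of the paper's setting.

```python
def collision_prob(w, offset):
    """Probability that two stations pick the same backoff slot when one
    draws uniformly from [0, w) and the other from [offset, offset + w)
    (illustrative model of the offset idea)."""
    hits = sum(1 for a in range(w)
                 for b in range(offset, offset + w) if a == b)
    return hits / (w * w)

print(collision_prob(16, 0))   # conventional scheme: 1/16
print(collision_prob(16, 4))   # with offset: fewer coinciding slots
```

    Only W − offset slot values coincide, so the collision probability drops from 1/W to (W − offset)/W²; disjoint ranges (offset ≥ W) cannot collide at all, at the cost of extra delay for the offset station.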

  14. Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju

    2018-03-01

    A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for cooperative relaying systems in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) demand in the data-link layer. Using the concept of effective capacity (EC), this goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves larger EC than other schemes while satisfying the delay QoS demand. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.
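
    Since the formulation rests on effective capacity, a minimal numeric sketch may help: EC(θ) = −(1/θ)·ln E[exp(−θR)] for a per-block service rate R, where θ is the delay-QoS exponent. The three-point rate distribution below is an arbitrary assumption; the code only illustrates that EC falls from the mean rate toward the minimum rate as θ grows, matching the abstract's observation that EC shrinks for larger QoS exponents.

```python
import math

def effective_capacity(rates, theta):
    """Effective capacity EC(theta) = -(1/theta) * ln E[exp(-theta * R)]
    for an i.i.d. per-block service rate R drawn uniformly from `rates`
    (standard EC definition; the rate list is illustrative)."""
    m = sum(math.exp(-theta * r) for r in rates) / len(rates)
    return -math.log(m) / theta

rates = [1.0, 2.0, 4.0]           # hypothetical achievable rates per block
loose = effective_capacity(rates, theta=0.01)   # loose delay QoS -> near mean
strict = effective_capacity(rates, theta=5.0)   # strict delay QoS -> near min
print(round(loose, 3), round(strict, 3))
```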

  15. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID:25265627
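The channel-ranking idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the paper's state-space estimator: modulation depth is approximated here by the squared correlation between a channel and one behavioral covariate, and an AIC-style score stands in for the model order selection criteria; all names and the simulated data are invented.

```python
# Hedged sketch: rank "neural channels" by a simple modulation-depth proxy
# (variance explained by a behavioral covariate), then pick a subset size
# with an AIC-like criterion. Illustrative only.
import math
import random

random.seed(0)

def modulation_depth(channel, covariate):
    """Squared correlation between channel and covariate, in [0, 1]."""
    n = len(channel)
    mx = sum(covariate) / n
    my = sum(channel) / n
    sxx = sum((x - mx) ** 2 for x in covariate)
    sxy = sum((x - mx) * (y - my) for x, y in zip(covariate, channel))
    syy = sum((y - my) ** 2 for y in channel)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy ** 2) / (sxx * syy)

# Simulated data: 6 channels, only the first 3 tuned to the covariate.
T = 200
covariate = [math.sin(0.1 * t) for t in range(T)]
channels = []
for c in range(6):
    gain = 1.0 if c < 3 else 0.0
    channels.append([gain * covariate[t] + random.gauss(0, 0.5) for t in range(T)])

depths = [modulation_depth(ch, covariate) for ch in channels]
ranking = sorted(range(6), key=lambda c: depths[c], reverse=True)

def aic_like(k):
    """Penalize each added channel; reward the unexplained variance it removes."""
    resid = 1.0
    for c in ranking[:k]:
        resid *= (1.0 - depths[c])
    return T * math.log(max(resid, 1e-12)) + 2 * k

best_k = min(range(1, 7), key=aic_like)
selected = ranking[:best_k]
```

The tuned channels float to the top of the ranking, and the order-selection score trades off fit against subset size, which is the shape of the scheme the abstract describes.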

  16. Collaborative Protection and Control Schemes for Shipboard Electrical Systems

    DTIC Science & Technology

    2007-03-26

VSCs) for fault current limiting and interruption. Revisions needed on the VSCs to perform these functions have been identified, and feasibility of this...disturbances very fast - less than 3-4 ms [3]. The next section summarizes the details of the agent-based protection scheme that uses the VSC as the...fault currents. In our previous work [2, 3], it has been demonstrated that this new functionality for the VSC can be achieved by proper selection of

  17. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitute for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated in terms of both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Two alternative schemes are included in the evaluation: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with comparable computation time. As compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
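The contrast between the ℓ2,1 penalty and a folded concave penalty can be illustrated with their group-wise proximal (thresholding) steps. This is not the paper's solver: MCP (minimax concave penalty) is used here as one representative folded concave penalty, and the "beamlet weight" groups are invented.

```python
# Illustrative sketch: group soft-thresholding (the l2,1 / group-lasso step)
# versus a group MCP step (a folded concave penalty in the spirit of gFCP).
# Folded concave penalties still zero out small groups but leave large
# groups nearly unbiased, which is the inconsistency with exact beam-angle
# selection that the l2,1 surrogate suffers from.
import math

def group_norm(g):
    return math.sqrt(sum(x * x for x in g))

def prox_group_lasso(g, lam):
    """Group soft-threshold: shrink the whole group toward zero by lam."""
    n = group_norm(g)
    if n <= lam:
        return [0.0] * len(g)
    scale = 1.0 - lam / n
    return [scale * x for x in g]

def prox_group_mcp(g, lam, gamma=3.0):
    """Group MCP (firm threshold): same cutoff for small groups, but no
    shrinkage at all for groups with norm above gamma * lam."""
    n = group_norm(g)
    if n <= lam:
        return [0.0] * len(g)
    if n >= gamma * lam:
        return list(g)  # large groups left unbiased
    scale = (1.0 - lam / n) / (1.0 - 1.0 / gamma)
    return [scale * x for x in g]

# Toy "beamlet weight" groups: one strong beam, one weak beam.
strong, weak = [3.0, 4.0], [0.2, 0.1]
lam = 0.5
```

Both penalties kill the weak group, but only the group lasso also shrinks the strong group, biasing the surviving beam's fluence; the MCP step returns it untouched.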

  18. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitute for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated in terms of both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Two alternative schemes are included in the evaluation: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with comparable computation time. As compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  19. Mitigation of inbreeding while preserving genetic gain in genomic breeding programs for outbred plants.

    PubMed

    Lin, Zibei; Shi, Fan; Hayes, Ben J; Daetwyler, Hans D

    2017-05-01

Heuristic genomic inbreeding controls reduce inbreeding in genomic breeding schemes without reducing genetic gain. Genomic selection is increasingly being implemented in plant breeding programs to accelerate genetic gain for economically important traits. However, it may cause significant loss of genetic diversity compared with traditional schemes using phenotypic selection. We propose heuristic strategies to control the rate of inbreeding in outbred plants, which can be categorised into three types: controls during mate allocation, controls during selection, and simultaneous selection and mate allocation. The proposed mate allocation measure GminF allocates two or more parents for mating in mating groups that minimise coancestry using a genomic relationship matrix. Two types of relationship-adjusted genomic breeding values, for parent selection candidates ([Formula: see text]) and for potential offspring ([Formula: see text]), are devised to control inbreeding during selection and even to enable simultaneous selection and mate allocation. These strategies were tested in a case study using a simulated perennial ryegrass breeding scheme. Compared to the genomic selection scheme without controls, all proposed strategies significantly decreased inbreeding while achieving comparable genetic gain. In particular, the scenario using [Formula: see text] in simultaneous selection and mate allocation reduced inbreeding to one-third of that in the original genomic selection scheme. The proposed strategies are readily applicable in any outbred plant breeding program.
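The mate-allocation idea can be sketched as a toy. This is not the paper's GminF (which allocates groups of two or more parents); it is a hypothetical greedy pairing that repeatedly picks the least-related unpaired pair from a genomic relationship matrix, with made-up matrix values.

```python
# Hypothetical GminF-style sketch: pair parents so that within-pair genomic
# relationship (a coancestry proxy) is minimized, via a greedy pass over a
# genomic relationship matrix G. Values are invented for illustration.

def greedy_min_coancestry_pairs(G):
    """Repeatedly pick the unpaired pair (i, j) with the smallest G[i][j]."""
    n = len(G)
    unpaired = set(range(n))
    pairs = []
    while len(unpaired) >= 2:
        i, j = min(
            ((a, b) for a in unpaired for b in unpaired if a < b),
            key=lambda ab: G[ab[0]][ab[1]],
        )
        pairs.append((i, j))
        unpaired -= {i, j}
    return pairs

# Toy genomic relationship matrix: parents 0/1 and 2/3 are close relatives,
# so the allocator should mate across the two families, not within them.
G = [
    [1.0, 0.6, 0.1, 0.2],
    [0.6, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.5],
    [0.2, 0.1, 0.5, 1.0],
]
pairs = greedy_min_coancestry_pairs(G)
```

With these values the greedy pass mates across families (pairs 0-2 and 1-3, mean relationship 0.1) instead of within them (0-1 and 2-3, mean relationship 0.55), which is the inbreeding-reduction mechanism the abstract describes.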

  20. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    PubMed

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

Due to the daily mass production and widespread variation of medical X-ray images, it is necessary to classify them for searching and retrieving purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, classes with similar shape contents are grouped into general overlapped classes based on merging measures and shape features. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or texture features only. Ultimately, in the last levels, this procedure continues until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm with a Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level, and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images in 18 classes (the IMAGECLEF 2005 database), and an accuracy rate of 93.6% is obtained in the last level of the hierarchical structure for the 18-class classification problem.
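The feature-selection step can be sketched in miniature. This is a hedged stand-in: a simple Fisher-style separability ratio replaces the paper's Mahalanobis class-separability measure, the "forward selection" below ranks features individually rather than orthogonalizing against the chosen set, and the data are invented.

```python
# Hedged sketch: greedy feature selection driven by a class-separability
# score (a Fisher-style ratio standing in for the paper's Mahalanobis
# measure). Data and setup are illustrative, not the IMAGECLEF experiment.

def fisher_score(values, labels):
    """Between-class over within-class variance for one feature."""
    classes = set(labels)
    overall = sum(values) / len(values)
    between = within = 0.0
    for c in classes:
        vc = [v for v, l in zip(values, labels) if l == c]
        mc = sum(vc) / len(vc)
        between += len(vc) * (mc - overall) ** 2
        within += sum((v - mc) ** 2 for v in vc)
    return between / within if within > 0 else float("inf")

def forward_select(features, labels, k):
    """Pick k features by individual separability score. (A real orthogonal
    forward selection would decorrelate each candidate against the
    already-chosen set at every step.)"""
    scored = sorted(range(len(features)),
                    key=lambda i: fisher_score(features[i], labels),
                    reverse=True)
    return scored[:k]

# Toy data: feature 0 separates the two classes, feature 1 is noise.
labels = [0, 0, 0, 1, 1, 1]
features = [
    [0.1, 0.2, 0.15, 0.9, 1.0, 0.95],  # discriminative
    [0.5, 0.4, 0.6, 0.55, 0.45, 0.5],  # uninformative
]
chosen = forward_select(features, labels, 1)
```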

  1. Performance evaluation of dispersion parameterization schemes in the plume simulation of FFT-07 diffusion experiment

    NASA Astrophysics Data System (ADS)

    Pandey, Gavendra; Sharan, Maithili

    2018-01-01

Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters (σ's) are described for both stable and unstable conditions. These schemes differ in (i) the extent to which they use on-site measurements, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with data from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out for both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses based on selected statistical measures indicate consistency between the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
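The two evaluation statistics named in the abstract, fractional bias (FB) and normalized mean square error (NMSE), have standard definitions, and the blocked bootstrap can be sketched alongside them. The block length, replicate count, and data below are invented for illustration.

```python
# Sketch of FB and NMSE plus a toy blocked-bootstrap confidence interval:
# whole blocks of (observed, predicted) pairs are resampled together to
# preserve serial correlation. Data and block length are illustrative.
import random

def fb(obs, pred):
    """Fractional bias: 2(Co - Cp) / (Co + Cp) on the means."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """Normalized mean square error: mean (Co - Cp)^2 / (mean Co * mean Cp)."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def blocked_bootstrap_ci(obs, pred, stat, block=5, reps=1000, seed=1):
    """Resample whole blocks of pairs; return 2.5th/97.5th percentiles."""
    rng = random.Random(seed)
    pairs = list(zip(obs, pred))
    blocks = [pairs[i:i + block] for i in range(0, len(pairs), block)]
    vals = []
    for _ in range(reps):
        sample = [p for _ in range(len(blocks)) for p in rng.choice(blocks)]
        o, p = zip(*sample)
        vals.append(stat(o, p))
    vals.sort()
    return vals[int(0.025 * reps)], vals[int(0.975 * reps)]

obs  = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.1, 0.9]
pred = [0.9, 1.1, 0.9, 1.0, 1.0, 1.2, 0.8, 0.9, 1.0, 1.0]
lo_fb, hi_fb = blocked_bootstrap_ci(obs, pred, fb)
```

A scheme whose FB confidence interval excludes another scheme's FB can be called significantly different, which is how the 95% limits are used in the abstract.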

  2. Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.

    PubMed

    Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio

    2017-08-04

    Snakes change their locomotion patterns in response to the environment. This ability is a motivation for developing snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme of snake-like robots that exhibited autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrains) was developed. Additionally, the control scheme was validated via simulations. A key insight revealed is that these locomotion patterns were not preprogrammed but emerged by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike local reflexive mechanisms proposed previously, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion, and generated reasonable locomotion patterns. It is expected that the results of this study can form the basis to design robots that can work under unpredictable and unstructured environments.

  3. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
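The data-reduction idea can be sketched with a toy. This is a hedged illustration: a tiny k-means vector quantizer compresses each class's "pixels" into a few prototypes, and a nearest-prototype rule stands in for the SVM; the point is only that the decision function is built from prototypes rather than from every pixel.

```python
# Hedged sketch of VQ-based training-set reduction: quantize labeled
# "pixels" (2-D feature vectors here) into prototypes per class, then
# classify new pixels against the prototypes. Nearest-prototype stands in
# for the SVM; data are synthetic.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_prototypes(points, k, iters=20, seed=0):
    """Plain Lloyd k-means returning k prototype vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centers[c]))
            buckets[i].append(p)
        for i, b in enumerate(buckets):
            if b:
                centers[i] = tuple(sum(x) / len(b) for x in zip(*b))
    return centers

def classify(p, prototypes):
    """prototypes: {label: [centers]}; return the label of the nearest center."""
    return min(prototypes, key=lambda l: min(dist2(p, c) for c in prototypes[l]))

rng = random.Random(42)
cell = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(200)]
background = [(rng.gauss(3, 0.3), rng.gauss(3, 0.3)) for _ in range(200)]
prototypes = {"cell": kmeans_prototypes(cell, 3),
              "bg": kmeans_prototypes(background, 3)}
```

Here 400 labeled pixels are compressed to 6 prototypes; a classifier trained on the prototypes (an SVM in the paper) is correspondingly cheaper to build and evaluate.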

  4. [Study on optimal model of hypothetical work injury insurance scheme].

    PubMed

    Ye, Chi-yu; Dong, Heng-jin; Wu, Yuan; Duan, Sheng-nan; Liu, Xiao-fang; You, Hua; Hu, Hui-mei; Wang, Lin-hao; Zhang, Xing; Wang, Jing

    2013-12-01

To explore an optimal model of a hypothetical work injury insurance scheme that is in line with the wishes of workers, based on the problems in the implementation of work injury insurance in China, and to provide useful information for relevant policy makers. Multistage cluster sampling was used to select subjects: first, 9 small, medium, and large enterprises were selected from three cities (counties) in Zhejiang Province, China according to their economic development, transportation, and cooperation; then, 31 workshops were randomly selected from the 9 enterprises. Face-to-face interviews were conducted by trained interviewers using a pre-designed questionnaire among all workers in the 31 workshops. After optimization of the hypothetical work injury insurance scheme, the willingness to participate in the scheme increased from 73.87% to 80.96%; the average willingness to pay for the scheme increased from 2.21% (51.77 yuan) to 2.38% of monthly wage (54.93 yuan); the median willingness to pay for the scheme increased from 1% to 1.2% of monthly wage, but decreased from 35 yuan to 30 yuan. The optimal model of the hypothetical work injury insurance scheme covers all national and provincial statutory occupational diseases and work accidents, as well as consultations about occupational diseases. The scheme is supposed to be implemented nationwide by the National Social Security Department, without regional differences. The premium is borne by the state, enterprises, and individuals, and an independent insurance fund is kept in a lifetime personal account for each insured individual. The premium is not refunded in any event. Compensation for occupational diseases or work accidents is unrelated to the enterprises of the insured workers but related to the length of insurance. The insurance becomes effective one year after enrollment, but it is put into effect immediately after an occupational disease or accident occurs.
The optimal model of the hypothetical work injury insurance scheme realizes cross-regional mobility of workers, minimizes regional differences, and embodies fairness. The proposed model will, to some extent, protect the rights and interests of enterprises, as well as the health rights and interests of workers when they are unemployed.

  5. Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.

    2006-05-01

In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As an illustration of the described methods and algorithms, we construct a number of difference schemes for the Burgers and Falkowich-Karman equations and discuss their numerical properties.

  6. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.
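The core of variable-probability sampling can be shown in a few lines. This is a hedged, single-stage toy, not the record's multistage design: units are drawn with probability proportional to an auxiliary size measure (in the inventory, a LANDSAT-derived quantity), and the Hansen-Hurwitz estimator remains unbiased for the total. All numbers are made up.

```python
# Illustrative PPS (probability-proportional-to-size) sampling sketch:
# draw units with replacement, prob proportional to `sizes`, and estimate
# the population total of `volumes` by averaging volume/prob over draws.
import random

def pps_estimate_total(units, sizes, volumes, n, seed=7):
    rng = random.Random(seed)
    total_size = sum(sizes)
    probs = [s / total_size for s in sizes]
    estimates = []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for u, p in zip(units, probs):
            acc += p
            if r < acc:
                chosen_u, chosen_p = u, p
                break
        else:  # guard against float round-off at the top of the CDF
            chosen_u, chosen_p = units[-1], probs[-1]
        estimates.append(volumes[chosen_u] / chosen_p)
    return sum(estimates) / len(estimates)

units = list(range(5))
sizes = [10, 20, 30, 25, 15]       # auxiliary size measure (e.g. crown cover)
volumes = [12, 19, 33, 24, 14]     # true per-unit volume (unknown in practice)
true_total = sum(volumes)          # 102
est = pps_estimate_total(units, sizes, volumes, n=500)
```

Because the auxiliary sizes track the true volumes well, each single-draw estimate volume/prob is already close to the total, which is why PPS designs need far fewer ground plots than equal-probability sampling.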

  7. A computerized scheme of SARS detection in early stage based on chest image of digital radiograph

    NASA Astrophysics Data System (ADS)

    Zheng, Zhong; Lan, Rihui; Lv, Guozheng

    2004-05-01

A computerized scheme for early severe acute respiratory syndrome (SARS) lesion detection in digital chest radiographs is presented in this paper. The scheme consists of two main parts: the first part determines suspect lesions using the theory of locally orderless images (LOI) and their spatial features; the second part selects real lesions among these suspects by their frequency features. The method used in the second part was first developed by Katsuragawa et al. and modified as necessary. Preliminary results indicate that these features are good criteria for telling early SARS lesions apart from other normal lung structures.

  8. Brain parcellation choice affects disease-related topology differences increasingly from global to local network levels.

    PubMed

    Lord, Anton; Ehrlich, Stefan; Borchardt, Viola; Geisler, Daniel; Seidel, Maria; Huber, Stefanie; Murr, Julia; Walter, Martin

    2016-03-30

Network-based analyses of deviant brain function have become extremely popular in psychiatric neuroimaging. Underpinning brain network analyses is the selection of appropriate regions of interest (ROIs). Although ROI selection is fundamental in network analysis, its impact on detecting disease effects remains unclear, as does the impact of parcellation choice when comparing results from different studies. We investigated the effects of anatomical (AAL) and literature-based (Dosenbach) parcellation schemes on the comparability of group differences in 35 female patients with anorexia nervosa and 35 age- and sex-matched healthy controls. Global and local network properties, including network-based statistics (NBS), were assessed on resting-state functional magnetic resonance imaging data obtained at 3T. The parcellation schemes were comparably consistent on global network properties, while NBS and local metrics differed in location but not in metric type. The location of local metric alterations varied between the AAL (parietal and cingulate cortices) and Dosenbach (insula, thalamus) parcellation approaches; however, consistency was observed for the occipital cortex. Patient-specific global network properties can be robustly observed using different parcellation schemes, while graph metrics characterizing impairments of individual nodes vary considerably. Therefore, the impact of parcellation choice on specific group differences depends on the level of network organization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
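The abstract's central point, that global summaries can agree across parcellations while node-level metrics disagree, can be made concrete with two toy adjacency matrices standing in for two parcellations of the same data. The matrices are invented.

```python
# Toy illustration: two 4-node "parcellations" with identical edge density
# (a global metric) but different per-node degrees (a local metric).

def density(adj):
    """Fraction of possible undirected edges that are present."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return edges / (n * (n - 1) / 2)

def degrees(adj):
    return [sum(row) for row in adj]

# Both graphs have 4 of 6 possible edges, but the edges fall differently.
parcA = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]
parcB = [[0, 1, 1, 1],
         [1, 0, 0, 0],
         [1, 0, 0, 1],
         [1, 0, 1, 0]]
```

The densities match while the degree sequences do not, mirroring the finding that parcellation choice matters increasingly as one moves from global to local network levels.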

  9. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.

    PubMed

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-09-25

Multi-Input Multi-Output (MIMO) transmission can improve wireless network performance. Sensors are usually single-antenna devices due to high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently exploit MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability and is an effective topology control approach. Existing virtual MIMO-based clustering schemes either do not fully exploit the benefits of MIMO or do not adaptively determine the clustering ranges, and the clustering mechanism needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which adaptively determines not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.

  10. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks

    PubMed Central

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-01-01

Multi-Input Multi-Output (MIMO) transmission can improve wireless network performance. Sensors are usually single-antenna devices due to high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently exploit MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability and is an effective topology control approach. Existing virtual MIMO-based clustering schemes either do not fully exploit the benefits of MIMO or do not adaptively determine the clustering ranges, and the clustering mechanism needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which adaptively determines not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude. PMID:27681731

  11. UMDR: Multi-Path Routing Protocol for Underwater Ad Hoc Networks with Directional Antenna

    NASA Astrophysics Data System (ADS)

    Yang, Jianmin; Liu, Songzuo; Liu, Qipei; Qiao, Gang

    2018-01-01

This paper presents a new routing scheme for underwater ad hoc networks based on directional antennas. Ad hoc networks with directional antennas have become a hot research topic because space reuse may increase network capacity. At present, researchers have applied traditional self-organizing routing protocols (such as DSR and AODV) [1] [2] to this type of network, with routing based on the shortest-path metric. However, such routing schemes often suffer from long transmission delays and frequent link fragmentation along the intermediate nodes of the selected route. This is caused by a unique feature of directional transmission, often called “deafness”. In this paper, we take a different approach and explore the advantages of space reuse through multipath routing. The paper examines the validity of conventional routing schemes in underwater ad hoc networks with directional antennas and presents a special design of a multipath routing algorithm for directional transmission. The experimental results show a significant performance improvement in throughput and latency.

  12. Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing

    2017-05-01

The roll grinder is one of the important parts of rolling machinery, and the grinding precision of the roll surface has a direct influence on the surface quality of the steel strip. During the grinding process, however, the centre bears the weight of the roll and alternating stress. Therefore, wear or spalling faults are easily observed on the centre, which leads to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of the roll grinder. Firstly, the fast kurtogram method is employed to select the sub-band filter parameters for optimal resonance demodulation. The envelope spectrum is then derived from the filtered signal. Finally, two health indicators are designed to diagnose the centre wear fault. The proposed scheme is assessed by analysing experimental data from a roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
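Envelope analysis, the core of resonance demodulation, can be sketched on a synthetic signal. This is a hedged illustration: a rectify-and-smooth envelope replaces the Hilbert-transform envelope, a naive DFT peak search replaces the kurtogram-tuned filter bank, and all frequencies are invented.

```python
# Hedged sketch of envelope analysis: fault impacts amplitude-modulate a
# structural "resonance" carrier; rectifying and smoothing recovers the
# envelope, whose spectrum peaks at the impact repetition frequency.
import math

def envelope(signal, win=8):
    """Rectify and moving-average (a crude Hilbert-envelope stand-in)."""
    rect = [abs(x) for x in signal]
    return [sum(rect[max(0, i - win):i + 1]) / (i - max(0, i - win) + 1)
            for i in range(len(rect))]

def dft_peak_freq(x, fs):
    """Return the nonzero frequency bin with the largest DFT magnitude."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = n = 128
fault_hz, carrier_hz = 8, 32
# Impacts at fault_hz amplitude-modulate the carrier resonance.
sig = [(1 + math.cos(2 * math.pi * fault_hz * t / fs))
       * math.cos(2 * math.pi * carrier_hz * t / fs) for t in range(n)]
env = envelope(sig)
peak = dft_peak_freq(env, fs)
```

The raw spectrum is dominated by the carrier, but the envelope spectrum peaks near the 8 Hz modulation rate; in the paper's setting that peak location is what the health indicators are built on.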

  13. Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach

    NASA Astrophysics Data System (ADS)

    Qu, Qi; Pei, Yong; Modestino, James W.; Tian, Xusheng

    2006-12-01

    Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation to the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme as well as a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. Thus, the simplicity of our proposed system has important implications for the implementation of a practical real-time video transmission system.

  14. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources.

    PubMed

    Cruz-Piris, Luis; Rivera, Diego; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2018-03-20

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.

  15. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources

    PubMed Central

    2018-01-01

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal. PMID:29558406

  16. Minimizing the semantic gap in biomedical content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
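The Ranking SVM component can be sketched via its pairwise reduction. This is a hedged stand-in: every ground-truth ordered pair (better, worse) becomes a constraint w·(x_better − x_worse) > 0, and a perceptron-style update replaces the SVM quadratic program; the features and data are invented.

```python
# Minimal learning-to-rank sketch in the spirit of Ranking SVM: learn a
# linear scoring function from ordered training pairs. A perceptron update
# stands in for the SVM solver; data are illustrative.

def train_ranker(ordered_pairs, dim, epochs=50, lr=0.1):
    """ordered_pairs: list of (better_vec, worse_vec) feature vectors."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in ordered_pairs:
            diff = [b - c for b, c in zip(better, worse)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy ground truth: items with a larger first feature rank higher.
pairs = [([1.0, 0.3], [0.2, 0.9]),
         ([0.8, 0.1], [0.1, 0.2]),
         ([0.9, 0.5], [0.3, 0.4])]
w = train_ranker(pairs, dim=2)
ranked = sorted([[0.2, 0.9], [1.0, 0.3], [0.5, 0.5]],
                key=lambda x: score(w, x), reverse=True)
```

Retrieval then just sorts candidate images by the learned score, which is how a rank function supports semantic-level retrieval once the pairwise constraints come from ground-truth relevance orderings.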

  17. Evaluation of breeding strategies for polledness in dairy cattle using a newly developed simulation framework for quantitative and Mendelian traits.

    PubMed

    Scheper, Carsten; Wensch-Dorendorf, Monika; Yin, Tong; Dressel, Holger; Swalve, Herrmann; König, Sven

    2016-06-29

    Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The status quo of the current polled breeding pool of genetically-closely related artificial insemination sires with lower breeding values for performance traits raises questions regarding the effects of intensified selection based on this founder pool. We developed a stochastic simulation framework that combines the stochastic simulation software QMSim and a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative trait genotypes initiated realistic initial breeding situations regarding allele frequencies, true breeding values for the quantitative trait and genetic relatedness. Intensified selection for polled cattle was achieved using an approach that weights estimated breeding values in the animal best linear unbiased prediction model for the quantitative trait depending on genotypes or phenotypes for the polled trait with a user-defined weighting factor. Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role for a fast dissemination of polled alleles compared to female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that does not take polledness into account, intensive selection for polled substantially reduced genetic gain for this quantitative trait after 25 generations. 
Reducing selection intensity for polled males while maintaining strong selection intensity among females simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. A fast transition to a completely polled population through intensified selection for polled was in contradiction to the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and selection on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
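    One assumed form of the genotype-weighted index described above (the study's exact formulation is not reproduced here) is to add a user-defined weighting factor k times a polled-genotype score to each animal's estimated breeding value and rank candidates on the result. All names and numbers below are illustrative.

```python
# Assumed sketch: index = EBV + k * genotype_score, where the score rewards
# PP (homozygous polled) over Pp over pp. k is the user-defined weighting
# factor the study varies; candidate values are toy data.

GENOTYPE_SCORE = {"PP": 2, "Pp": 1, "pp": 0}

def selection_index(ebv, genotype, k):
    return ebv + k * GENOTYPE_SCORE[genotype]

def select_sires(candidates, k, n):
    """candidates: list of (id, ebv, genotype); returns the top-n ids."""
    ranked = sorted(candidates,
                    key=lambda c: selection_index(c[1], c[2], k),
                    reverse=True)
    return [c[0] for c in ranked[:n]]

bulls = [("A", 120.0, "pp"), ("B", 100.0, "PP"), ("C", 110.0, "Pp")]
print(select_sires(bulls, k=0.0, n=1))   # no weight on polled -> ['A']
print(select_sires(bulls, k=15.0, n=1))  # strong weight -> ['B']
```

    The trade-off reported above falls out directly: raising k fixes the polled allele faster but lets lower-EBV carriers displace higher-EBV horned animals.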

  18. Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model

    PubMed Central

    Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong

    2014-01-01

    Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem to provide multiple sensing services while maximizing the network lifetime. We firstly introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue into three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005

  19. Quantifying and Reducing Uncertainties in Estimating OMI Tropospheric Column NO2 Trend over The United States

    NASA Astrophysics Data System (ADS)

    Smeltzer, C. D.; Wang, Y.; Boersma, F.; Celarier, E. A.; Bucsela, E. J.

    2013-12-01

    We investigate the effects of retrieval radiation schemes and parameters on trend analysis using tropospheric nitrogen dioxide (NO2) vertical column density (VCD) measurements over the United States. Ozone Monitoring Instrument (OMI) observations from 2005 through 2012 are used in this analysis. We investigated two radiation schemes, provided by the National Aeronautics and Space Administration (NASA TOMRAD) and the Koninklijk Nederlands Meteorologisch Instituut (KNMI DAK). In addition, we analyzed trend dependence on radiation parameters, including surface albedo and viewing geometry. The cross-track mean VCD average difference is 10-15% between the two radiation schemes in 2005. As the OMI anomaly developed and progressively worsened, the difference between the two schemes became larger. Furthermore, applying surface albedo measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) leads to increases of estimated NO2 VCD trends over high-emission regions. We find that the uncertainties of OMI-derived NO2 VCD trends can be reduced by up to a factor of 3 by selecting OMI cross-track rows on the basis of their performance over the ocean [see abstract figure]. Comparison of OMI tropospheric VCD trends to those estimated based on the EPA surface NO2 observations indicates that using MODIS surface albedo data and a narrower selection of OMI cross-track rows greatly improves the agreement of estimated trends between satellite and surface data. This figure shows the reduction of uncertainty in the OMI NO2 trend achieved by selecting OMI cross-track rows based on their performance over the ocean. With this technique, uncertainties within the seasonal trend may be reduced by a factor of 3 or more (blue) compared with only removing the anomalous rows, i.e., considering OMI cross-track rows 4-24 (red).
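    The row-selection effect on trend uncertainty can be illustrated with synthetic data: the standard error of a least-squares trend slope shrinks when the noisier subset of observations is excluded. The series below are toy monthly values, not OMI data, and the noise levels are assumptions.

```python
import numpy as np

# Linear trend and its standard error from a monthly time series; comparing
# a low-noise subset ("selected rows") against a high-noise one ("all rows").

def trend_with_stderr(y):
    t = np.arange(len(y), dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    n = len(y)
    sigma2 = res[0] / (n - 2)                     # residual variance
    se_slope = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
    return coef[0], se_slope

rng = np.random.default_rng(0)
t = np.arange(96)                                  # 8 years of monthly data
clean = 0.05 * t + rng.normal(0, 0.2, 96)          # selected rows: low noise
noisy = 0.05 * t + rng.normal(0, 1.0, 96)          # all rows: high noise

slope_c, se_c = trend_with_stderr(clean)
slope_n, se_n = trend_with_stderr(noisy)
assert se_c < se_n     # row selection shrinks the trend uncertainty
```

    With a 5x noise reduction the slope standard error drops by roughly the same factor, which is the mechanism behind the factor-of-3 improvement reported above.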

  20. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate as compared with other previously reported schemes that are based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
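    The adaptive-threshold region growing used inside each VOI can be sketched in 2-D: the acceptance threshold tracks the running mean of the region, which is what keeps growth from leaking into the bright parenchyma. The grid values and tolerance below are assumed toy numbers (roughly HU-like), not the study's parameters.

```python
from collections import deque

# Toy 2-D region growing with an adaptive threshold (region mean + tolerance).
# The airway lumen is the low-intensity region; bright tissue is rejected.

def grow_region(img, seed, tol=60):
    h, w = len(img), len(img[0])
    region = {seed}
    queue = deque([seed])
    total = img[seed[0]][seed[1]]
    while queue:
        r, c = queue.popleft()
        thresh = total / len(region) + tol   # adaptive: follows the region mean
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if img[nr][nc] <= thresh:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    queue.append((nr, nc))
    return region

# Low-intensity "airway" (~ -950) surrounded by brighter "parenchyma".
img = [[-950, -950,  -80],
       [-940, -930,  -60],
       [ -70,  -90,  -50]]
airway = grow_region(img, seed=(0, 0))
assert (1, 1) in airway and (2, 2) not in airway
```

    In the full 3-D scheme the same acceptance test runs per VOI, so the threshold re-adapts as the algorithm moves down successive airway generations.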

  1. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component in ecosystems and urban infrastructures. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers. Therefore, relevant studies on optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers can be made with the ANP method. This method could be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimization of schemes for urban green space planning and design.
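    The computational core of ANP can be sketched abstractly: a column-stochastic supermatrix encoding the network of influences is raised to successive powers until it converges, and the limiting column gives the priorities of the alternatives. The 3x3 matrix below is a toy stand-in, not the study's fourteen-factor model.

```python
import numpy as np

# Limit-supermatrix sketch of the ANP priority computation.

def limit_priorities(W, tol=1e-10, max_iter=1000):
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            break
        M = M_next
    return M[:, 0]          # columns of the limit matrix coincide

W = np.array([[0.2, 0.3, 0.5],
              [0.5, 0.4, 0.2],
              [0.3, 0.3, 0.3]])   # columns sum to 1 (column-stochastic)
p = limit_priorities(W)
assert abs(p.sum() - 1.0) < 1e-6
assert np.allclose(W @ p, p, atol=1e-6)   # stationary priority vector
```

    The scheme with the largest limiting priority ("importance degree") is the one selected, which is how scheme 3 emerges in the study above.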

  2. An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Heydari, Ali; Balakrishnan, S. N.

    2014-12-01

    The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.

  3. Stationary Wavelet Transform and AdaBoost with SVM Based Pathological Brain Detection in MRI Scanning.

    PubMed

    Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar

    2017-01-01

    This paper presents an automatic classification system for segregating pathological brains from normal brains in magnetic resonance imaging scanning. The proposed system employs a contrast limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. A two-dimensional stationary wavelet transform is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values computed from the level-2 SWT coefficients. Then, the relevant and uncorrelated features are selected using a symmetric uncertainty ranking filter. Subsequently, the selected features are given as input to the proposed AdaBoost with support vector machine classifier, where SVM is used as the base classifier of the AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset-255, have been utilized. The results of 5 runs of k-fold stratified cross-validation indicate that the suggested scheme offers better performance than other existing schemes in terms of accuracy and number of features. The proposed system achieves ideal classification over Dataset-66 and Dataset-160, whereas for Dataset-255 an accuracy of 99.45% is achieved. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
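    The symmetric uncertainty ranking filter mentioned above has a compact standard definition, SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), a normalised mutual information lying in [0, 1]. A minimal sketch for discrete feature/class vectors (the toy vectors are assumptions):

```python
from math import log2
from collections import Counter

# Symmetric uncertainty for discrete variables: 2*I(X;Y) / (H(X)+H(Y)).

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def symmetric_uncertainty(xs, ys):
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    return 2 * mutual_info(xs, ys) / (hx + hy)

labels      = [0, 0, 1, 1, 0, 1, 0, 1]
informative = [0, 0, 1, 1, 0, 1, 0, 1]   # copies the class -> SU = 1
irrelevant  = [0, 1, 0, 1, 0, 1, 0, 1]   # nearly independent of the class

print(round(symmetric_uncertainty(informative, labels), 3))  # 1.0
print(round(symmetric_uncertainty(irrelevant, labels), 3))
```

    Features are ranked by SU against the class label and the top of the ranking is retained, which is how the filter removes irrelevant and redundant SWT features.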

  4. A pyramid breeding of eight grain-yield related quantitative trait loci based on marker-assistant and phenotype selection in rice (Oryza sativa L.).

    PubMed

    Zong, Guo; Wang, Ahong; Wang, Lu; Liang, Guohua; Gu, Minghong; Sang, Tao; Han, Bin

    2012-07-20

    1000-Grain weight and spikelet number per panicle are two important components of rice grain yield. In our previous study, eight quantitative trait loci (QTLs) conferring spikelet number per panicle and 1000-grain weight were mapped through sequencing-based genotyping of 150 rice recombinant inbred lines (RILs). In this study, we validated the effects of four QTLs from Nipponbare using chromosome segment substitution lines (CSSLs), and pyramided eight grain yield related QTLs. The new lines containing the eight QTLs with positive effects showed increased panicle and spikelet size as compared with the parent variety 93-11. We further proposed a novel pyramid breeding scheme based on marker-assistant and phenotype selection (MAPS). This scheme allowed pyramiding of as many as 24 QTLs in a single hybridization without massive cross work. This study provides insights into the molecular basis of rice grain yield and direct resources for high-yielding rice breeding. Copyright © 2012. Published by Elsevier Ltd.

  5. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks

    PubMed Central

    Li, Jiayin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-01-01

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for a spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs. PMID:29117152
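    The KCS construction referred to above rests on a standard identity: separable spatial and temporal measurement matrices combine into a single Kronecker-product sensing matrix for the vectorised spatial-temporal signal. A toy-dimension check (all sizes are assumptions, not the paper's parameters):

```python
import numpy as np

# Kronecker compressive sensing: measuring X separably with Phi_s (space)
# and Phi_t (time) equals measuring vec(X) with the Kronecker product
# Phi_t (x) Phi_s, via the identity vec(A X B^T) = (B (x) A) vec(X).

rng = np.random.default_rng(1)
n_space, n_time = 8, 6        # signal dimensions
m_space, m_time = 4, 3        # measurement counts per dimension

Phi_s = rng.normal(size=(m_space, n_space)) / np.sqrt(m_space)
Phi_t = rng.normal(size=(m_time, n_time)) / np.sqrt(m_time)
Phi = np.kron(Phi_t, Phi_s)   # equivalent KCS sensing matrix

X = rng.normal(size=(n_space, n_time))   # spatial-temporal signal
Y = Phi_s @ X @ Phi_t.T                  # separable measurements

y_vec = Phi @ X.flatten(order="F")       # column-major vectorisation
assert np.allclose(y_vec, Y.flatten(order="F"))
```

    The practical payoff is that each node only ever applies the small temporal matrix; the large Kronecker operator exists implicitly, at the decoder.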

  6. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N^(l)(N^(l) - 1)/2 in BT, 1 in BB, and (N^(l) - 1) in SBT or ISBT, where N^(l) is the instantaneous number of particles in the l-th cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N^(l) - 1), i.e., N_sel < (N^(l) - 1), while keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if N_sel is set to 1 and N^(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no time counter (NTC) and nearest neighbor (NN) collision models.
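    The pair-count comparison in the abstract can be made concrete with a toy per-cell selection routine. Collision acceptance probabilities are omitted; this only illustrates how many candidate pairs each scheme examines, and the GBT branch is an assumed simplification of the paper's construction.

```python
import random

# Candidate collision pairs per cell: BT checks every pair, BB one random
# pair, SBT pairs each particle i with one random partner j > i, and GBT
# examines a chosen number N_sel <= N - 1 of such pairs.

def candidate_pairs(scheme, particles, n_sel=None, rng=random.Random(0)):
    n = len(particles)
    if scheme == "BT":
        return [(i, j) for i in range(n) for j in range(i + 1, n)]
    if scheme == "BB":
        return [tuple(rng.sample(range(n), 2))]
    if scheme == "SBT":
        return [(i, rng.randrange(i + 1, n)) for i in range(n - 1)]
    if scheme == "GBT":   # assumed sketch: N_sel randomly chosen lead particles
        starts = rng.sample(range(n - 1), n_sel)
        return [(i, rng.randrange(i + 1, n)) for i in starts]
    raise ValueError(scheme)

cell = list(range(10))             # 10 simulated particles in one cell
assert len(candidate_pairs("BT", cell)) == 10 * 9 // 2
assert len(candidate_pairs("BB", cell)) == 1
assert len(candidate_pairs("SBT", cell)) == 9
assert len(candidate_pairs("GBT", cell, n_sel=4)) == 4
```

    The cost ordering BT >> SBT > GBT > BB is exactly the efficiency axis the generalization exploits while rescaling acceptance probabilities to preserve the collision frequency.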

  7. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    PubMed

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for a spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.

  8. Selection of Plot Remeasurement in an Annual Inventory

    Treesearch

    Mark H. Hansen; Hans T. Schreuder; Dave Heinzen

    2000-01-01

    A plot selection approach is proposed based on experience from the Annual Forest Inventory System (AFIS) in the Aspen-Birch Unit of northeastern Minnesota. The emphasis is on a mixture of strategies. Although the Agricultural Act of 1998 requires that a fixed 20 percent of plots be measured each year in each state, sooner or later we will need to vary the scheme to...

  9. A modified mass selection scheme for creating winter-hardy faba bean (Vicia faba L.) lines with a broad genetic base

    USDA-ARS?s Scientific Manuscript database

    Winter-hardy faba bean (Vicia faba L.) from northern Europe is represented by a rather narrow gene pool. Limited selection gains for overwintering beyond a maximum of -25°C have restricted the adoption of this crop. Therefore, the faba bean collection maintained by the USDA-ARS National Plant Germpl...

  10. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

    A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model used to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is the pattern recognition task, in which an exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. In the third stage, the scheme decides, on the basis of certain performance criteria, whether a new map should be generated to replace the originally chosen map. A state-dependent routing algorithm, which assigns each incoming call to a proper path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only either the self-organization mechanism or the routing mechanism.

  11. On fast carry select adders

    NASA Technical Reports Server (NTRS)

    Shamanna, M.; Whitaker, S.

    1992-01-01

    This paper presents an architecture for a high-speed carry select adder with very long bit lengths utilizing a conflict-free bypass scheme. The proposed scheme has almost half the number of transistors and is faster than a conventional carry select adder. A comparative study is also made between the proposed adder and a Manchester carry chain adder which shows that the proposed scheme has the same transistor count, without suffering any performance degradation, compared to the Manchester carry chain adder.
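    The carry select principle behind the adder above can be sketched behaviourally: each block computes both possible sums in parallel (carry-in 0 and carry-in 1), and a multiplexer picks one once the real carry arrives, trading area for speed. The block width and bit length below are toy parameters, not the paper's design.

```python
# Behavioural sketch of a carry select adder over little-endian bit lists.

def ripple_add(a_bits, b_bits, carry_in):
    """Ripple-carry add within one block; returns (sum_bits, carry_out)."""
    out, carry = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out, carry

def carry_select_add(a, b, width=16, block=4):
    a_bits = [(a >> i) & 1 for i in range(width)]
    b_bits = [(b >> i) & 1 for i in range(width)]
    result, carry = [], 0
    for lo in range(0, width, block):
        blk_a, blk_b = a_bits[lo:lo + block], b_bits[lo:lo + block]
        sum0, c0 = ripple_add(blk_a, blk_b, 0)   # precomputed for carry-in 0
        sum1, c1 = ripple_add(blk_a, blk_b, 1)   # precomputed for carry-in 1
        result += sum1 if carry else sum0        # the "select" multiplexer
        carry = c1 if carry else c0
    return sum(bit << i for i, bit in enumerate(result))

assert carry_select_add(51234, 14321) == (51234 + 14321) % (1 << 16)
assert carry_select_add(0xFFFF, 1) == 0          # modular wrap-around
```

    In hardware the two per-block sums are computed concurrently, so the critical path grows with the number of blocks rather than the full bit length; the bypass scheme above attacks the remaining carry-propagation chain between blocks.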

  12. Efficient multichannel acoustic echo cancellation using constrained tap selection schemes in the subband domain

    NASA Astrophysics Data System (ADS)

    Desiraju, Naveen Kumar; Doclo, Simon; Wolff, Tobias

    2017-12-01

    Acoustic echo cancellation (AEC) is a key speech enhancement technology in speech communication and voice-enabled devices. AEC systems employ adaptive filters to estimate the acoustic echo paths between the loudspeakers and the microphone(s). In applications involving surround sound, the computational complexity of an AEC system may become demanding due to the multiple loudspeaker channels and the necessity of using long filters in reverberant environments. In order to reduce the computational complexity, the approach of partially updating the AEC filters is considered in this paper. In particular, we investigate tap selection schemes which exploit the sparsity present in the loudspeaker channels for partially updating subband AEC filters. The potential for exploiting signal sparsity across three dimensions, namely time, frequency, and channels, is analyzed. A thorough analysis of different state-of-the-art tap selection schemes is performed and insights about their limitations are gained. A novel tap selection scheme is proposed which overcomes these limitations by exploiting signal sparsity while not ignoring any filters for update in the different subbands and channels. Extensive simulation results using both artificial as well as real-world multichannel signals show that the proposed tap selection scheme outperforms state-of-the-art tap selection schemes in terms of echo cancellation performance. In addition, it yields almost identical echo cancellation performance as compared to updating all filter taps at a significantly reduced computational cost.
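    One classic instance of partial tap updating is the M-max selection, which updates only the filter taps whose current input samples have the largest magnitude. The sketch below is a generic illustration of that idea on a synthetic echo path, not the paper's proposed subband scheme.

```python
import numpy as np

# M-max partial-update NLMS: per sample, update only the m_update taps with
# the largest input magnitude, cutting the update cost roughly by m/n.

def mmax_nlms(x, d, n_taps=16, m_update=8, mu=1.0, eps=1e-8):
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]         # taps: x[n], x[n-1], ...
        e = d[n] - w @ u                          # a-priori echo estimate error
        sel = np.argsort(np.abs(u))[-m_update:]   # M largest-magnitude inputs
        w[sel] += mu * e * u[sel] / (u @ u + eps) # partial NLMS update
        errors.append(e)
    return w, np.asarray(errors)

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                          # loudspeaker signal
h = rng.normal(size=16) * np.exp(-0.4 * np.arange(16))  # toy echo path
d = np.convolve(x, h)[:len(x)]                     # microphone (echo only)

w, e = mmax_nlms(x, d)
# Echo cancellation improves over time despite updating half the taps.
assert np.mean(e[-500:] ** 2) < 0.05 * np.mean(e[:500] ** 2)
```

    This captures the trade-off discussed above: near-full convergence at a fraction of the update cost, with the selection rule deciding which taps matter at each instant.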

  13. New hybrid reverse differential pulse position width modulation scheme for wireless optical communication

    NASA Astrophysics Data System (ADS)

    Liao, Renbo; Liu, Hongzhan; Qiao, Yaojun

    2014-05-01

    In order to improve the power efficiency and reduce the packet error rate of reverse differential pulse position modulation (RDPPM) for wireless optical communication (WOC), a hybrid reverse differential pulse position width modulation (RDPPWM) scheme is proposed, based on RDPPM and reverse pulse width modulation. Subsequently, the symbol structure of RDPPWM is briefly analyzed, and its performance is compared with that of other modulation schemes in terms of average transmitted power, bandwidth requirement, and packet error rate over ideal additive white Gaussian noise (AWGN) channels. Based on the given model, the simulation results show that the proposed modulation scheme has the advantages of improving the power efficiency and reducing the bandwidth requirement. Moreover, in terms of error probability performance, RDPPWM can achieve a much lower packet error rate than that of RDPPM. For example, at the same received signal power of -28 dBm, the packet error rate of RDPPWM can decrease to 2.6×10^-12, while that of RDPPM is 2.2×10. Furthermore, RDPPWM does not need symbol synchronization at the receiving end. These considerations make RDPPWM a favorable candidate to select as the modulation scheme in the WOC systems.

  14. Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei

    2016-01-29

    In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal-routing choice of the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and the selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during the activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. By analyzing the security of data storage, data transfer and the mechanism of dynamic key management, we show that the proposed scheme helps improve the resilience of the key management system of the network while satisfying higher connectivity and storage efficiency.
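    The session-key setup can be illustrated with a plain Diffie-Hellman exchange over a toy multiplicative group. Real deployments, like the modified ECDH above, use elliptic-curve groups and vetted parameters; the modulus below is deliberately small and is NOT secure.

```python
import secrets

# Diffie-Hellman sketch: each party combines its private key with the
# other's public key and both derive the same session key.

P = 0xFFFFFFFFFFFFFFC5   # toy prime modulus (2**64 - 59); insecure at this size
G = 5                    # generator

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

sink_priv, sink_pub = keypair()        # mobile sink
head_priv, head_pub = keypair()        # cluster head

# Public values are exchanged; private keys never leave their owners.
k_sink = pow(head_pub, sink_priv, P)   # sink's view of the shared secret
k_head = pow(sink_pub, head_priv, P)   # cluster head's view
assert k_sink == k_head                # both derive the same session key
```

    The asymmetry in the paper's design follows from resource constraints: only the well-provisioned cluster head and sink run the public-key exchange, while member nodes fall back to cheap symmetric-polynomial keys.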

  15. Receding horizon online optimization for torque control of gasoline engines.

    PubMed

    Kang, Mingxin; Shen, Tielong

    2016-11-01

    This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs the nonlinear model exploited based on mean-value modeling principle of engine systems without any linearizing reformation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations are implemented to demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Using object-based image analysis to guide the selection of field sample locations

    USDA-ARS?s Scientific Manuscript database

    One of the most challenging tasks for resource management and research is designing field sampling schemes to achieve unbiased estimates of ecosystem parameters as efficiently as possible. This study focused on the potential of fine-scale image objects from object-based image analysis (OBIA) to be u...

  17. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
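    The variance-reduction idea can be isolated in a toy rare-event problem: estimate P(Z > b) for a standard normal by sampling from a distribution shifted toward the failure region and reweighting by the likelihood ratio, analogous to how the Girsanov control shifts the excitation toward the design point. The static Gaussian here is only a stand-in for the dynamical problem above.

```python
import numpy as np
from math import erf, sqrt

# Importance sampling for a small tail probability P(Z > b), Z ~ N(0, 1):
# draw from N(b, 1) and reweight by the likelihood ratio N(0,1)/N(b,1).

def tail_prob_is(b, n, rng):
    z = rng.normal(loc=b, size=n)           # proposal centred at the design point
    weights = np.exp(-b * z + b * b / 2.0)  # likelihood ratio phi(z)/phi(z - b)
    return np.mean((z > b) * weights)

rng = np.random.default_rng(4)
b = 4.0
exact = 0.5 * (1.0 - erf(b / sqrt(2.0)))    # P(Z > 4), about 3.2e-5
est = tail_prob_is(b, n=20000, rng=rng)
assert abs(est - exact) / exact < 0.2       # accurate despite the tiny probability
```

    Crude Monte Carlo with the same 20,000 samples would expect fewer than one hit in the failure region, which is exactly the regime where a well-chosen change of measure earns its variance reduction.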

  18. Performance Evaluation of Relay Selection Schemes in Beacon-Assisted Dual-Hop Cognitive Radio Wireless Sensor Networks under Impact of Hardware Noises.

    PubMed

    Hieu, Tran Dinh; Duy, Tran Trung; Dung, Le The; Choi, Seong Gon

    2018-06-05

    To solve the problem of energy constraints and spectrum scarcity for cognitive radio wireless sensor networks (CR-WSNs), an underlay decode-and-forward relaying scheme is considered, where the energy constrained secondary source and relay nodes are capable of harvesting energy from a multi-antenna power beacon (PB) and using that harvested energy to forward the source information to the destination. Based on the time switching receiver architecture, three relaying protocols, namely, hybrid partial relay selection (H-PRS), conventional opportunistic relay selection (C-ORS), and best opportunistic relay selection (B-ORS) protocols are considered to enhance the end-to-end performance under the joint impact of maximal interference constraint and transceiver hardware impairments. For performance evaluation and comparison, we derive the exact and asymptotic closed-form expressions of outage probability (OP) and throughput (TP) to provide significant insights into the impact of our proposed protocols on the system performance over Rayleigh fading channel. Finally, simulation results validate the theoretical results.
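    Opportunistic relay selection, as commonly defined, follows a max-min rule: the chosen relay maximises the weaker of its two hop qualities, since the worse hop bottlenecks the end-to-end link. A minimal sketch with synthetic channel gains (the paper's protocols additionally account for harvested energy, interference constraints, and hardware impairments):

```python
# Max-min opportunistic relay selection over dual-hop links.

def best_opportunistic_relay(gains):
    """gains: {relay_id: (g_source_relay, g_relay_dest)} -> chosen relay id."""
    return max(gains, key=lambda r: min(gains[r]))

gains = {
    "R1": (0.9, 0.2),   # strong first hop, weak second -> bottleneck 0.2
    "R2": (0.6, 0.5),   # balanced hops -> bottleneck 0.5
    "R3": (0.3, 0.8),   # bottleneck 0.3
}
assert best_opportunistic_relay(gains) == "R2"
```

    Partial relay selection, by contrast, ranks relays on one hop only, which is cheaper to estimate but gives up the bottleneck guarantee; that gap is what the outage and throughput comparisons above quantify.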

  19. A controlled ac Stark echo for quantum memories.

    PubMed

    Ham, Byoung S

    2017-08-09

    A quantum memory protocol of controlled ac Stark echoes (CASE) based on a double rephasing photon echo scheme via controlled Rabi flopping is proposed. The double rephasing scheme of photon echoes inherently satisfies the no-population inversion requirement for quantum memories, but the resultant absorptive echo remains a fundamental problem. Herein, it is reported that the first echo in the double rephasing scheme can be dynamically controlled so that it does not affect the second echo, which is accomplished by using unbalanced ac Stark shifts. Then, the second echo is coherently controlled to be emissive via controlled coherence conversion. Finally, a near-perfect ultralong CASE is presented using a backward echo scheme. Compared with other methods such as dc Stark echoes, the present protocol is all-optical, with advantages of wavelength-selective dynamic control of quantum processing for erasing, buffering, and channel multiplexing.

  20. Wavelet subspace decomposition of thermal infrared images for defect detection in artworks

    NASA Astrophysics Data System (ADS)

    Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.

    2016-07-01

    The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. Classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks: it involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of the otherwise invisible weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level; a novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory-based sample as well as real artworks, and a new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm.

  1. Underwater target classification using wavelet packets and neural networks.

    PubMed

    Azimi-Sadjadi, M R; Yao, D; Huang, Q; Dobeck, G J

    2000-01-01

    In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from the acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of the backscattered signals from six different objects: two mine-like targets and four nontargets for several aspect angles. Simulation results on ten different noisy realizations and for signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated based on these results demonstrated excellent classification performance of the system. The generalization ability of the trained network was demonstrated by computing the error and classification rate statistics on a large data set. A multiaspect fusion scheme was also adopted in order to further improve the classification performance.
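    Linear predictive coding reduces each subband signal to a short coefficient vector. Below is a minimal autocorrelation-method LPC sketch, not the authors' wavelet-packet pipeline; the model order and test signal are illustrative only.

```python
import numpy as np

def lpc_coefficients(signal, order=8):
    """LPC coefficients via the autocorrelation method.

    Solves the Yule-Walker equations R a = r for the predictor
    s[n] ~ sum_k a[k] * s[n-k]; the coefficient vector serves as a
    compact feature for a subband signal.
    """
    s = signal - np.mean(signal)
    r = np.correlate(s, s, mode="full")[len(s) - 1:len(s) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# A pure resonance is captured exactly by a 2nd-order predictor:
# for s[n] = cos(w*n), s[n] = 2*cos(w)*s[n-1] - s[n-2]
n = np.arange(512)
tone = np.cos(0.3 * np.pi * n)
a = lpc_coefficients(tone, order=2)
```

    In a classifier front end, each wavelet-packet subband would contribute one such coefficient vector to the overall feature set.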

  2. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed, and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load condition, voltage sag condition, and tapped load fault under one-phase-open condition at both points-of-common coupling.
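    The golden section search component can be sketched as follows. This is the textbook algorithm applied to a generic unimodal objective, not the paper's combined controller-tuning objective of harmonic distortion, power factor, and voltage ripple.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search.

    Each iteration shrinks the bracket by the factor 1/phi ~ 0.618,
    reusing one interior evaluation point per step.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Example: the minimizer of (x - 2)^2 + 1 on [0, 5] is x = 2
x_min = golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

    In a hybrid scheme such as the one described, a bracketing search like this would typically be combined with a successive linear search to refine the controller gains.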

  3. Learning-based position control of a closed-kinematic chain robot end-effector

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1990-01-01

    A trajectory control scheme whose design is based on learning theory is presented for a six-degree-of-freedom (DOF) robot end-effector built to study robotic assembly of NASA hardware in space. The control scheme consists of two control systems: the feedback control system and the learning control system. The feedback control system is designed using the concept of linearization about a selected operating point and the method of pole placement, so that the closed-loop linearized system is stabilized. The learning control system, consisting of PD-type learning controllers, provides additional inputs to improve the end-effector performance after each trial. Experimental studies performed on a 2-DOF end-effector built at CUA for three tracking cases show that actual trajectories approach desired trajectories as the number of trials increases. The tracking errors are substantially reduced after only five trials.
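    A PD-type iterative learning update can be sketched on a toy plant. The gains, plant model, and reference trajectory below are hypothetical stand-ins, not the end-effector dynamics; the sketch only illustrates how the trial-to-trial update u_{k+1}(t) = u_k(t) + Kp*e_k(t) + Kd*de_k(t)/dt drives the tracking error down over trials.

```python
import numpy as np

def ilc_pd_update(u, e, kp=0.5, kd=0.2, dt=0.01):
    """PD-type ILC update applied to the whole trial trajectory."""
    de = np.gradient(e, dt)          # time derivative of the error signal
    return u + kp * e + kd * de

def simulate(u, dt=0.01):
    """Toy stable plant y' = -y + u, integrated by forward Euler."""
    y = np.zeros_like(u)
    for i in range(len(u) - 1):
        y[i + 1] = y[i] + dt * (-y[i] + u[i])
    return y

dt = 0.01
t = np.arange(0.0, 2.0, dt)
y_ref = 1.0 - np.exp(-2.0 * t)       # desired trajectory (hypothetical)
u = np.zeros_like(t)
errors = []
for trial in range(5):               # five trials, echoing the experiments
    y = simulate(u, dt)
    e = y_ref - y
    errors.append(np.max(np.abs(e)))
    u = ilc_pd_update(u, e, dt=dt)
```

    The recorded peak errors shrink from trial to trial, mirroring the reported behavior of the actual trajectories approaching the desired ones as trials accumulate.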

  4. Selection of plot remeasurement in an annual inventory

    Treesearch

    Mark H. Hansen; Hans T. Schreuder; Dave Heinzen

    2000-01-01

    A plot selection approach is proposed based on experience from the Annual Forest Inventory System (AFIS) in the Aspen-Birch Unit of northeastern Minnesota. The emphasis is on a mixture of strategies. Although the Agricultural Act of 1998 requires that a fixed 20 percent of plots be measured each year in each state, sooner or later we will need to vary the scheme to...

  5. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate, and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.
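    The hybrid differencing scheme that the candidate schemes were measured against can be illustrated on steady 1-D convection-diffusion. The grid size, velocity, and diffusivity below are arbitrary illustration values, and the sketch is a generic finite-volume formulation, not the TEACH implementation.

```python
import numpy as np

def solve_convection_diffusion(n=41, u=2.5, gamma=0.1, L=1.0):
    """Steady 1-D convection-diffusion with the hybrid differencing scheme.

    Hybrid scheme: central differencing when the cell Peclet number
    |Pe| = |u*dx/gamma| < 2, otherwise upwind with diffusion dropped.
    Dirichlet BCs: phi(0) = 1, phi(L) = 0.
    """
    dx = L / (n - 1)
    F, D = u, gamma / dx                 # convective and diffusive fluxes
    pe = F / D                           # cell Peclet number
    if abs(pe) < 2.0:                    # central differencing branch
        aW, aE = D + F / 2.0, D - F / 2.0
    else:                                # upwind branch, diffusion neglected
        aW, aE = max(F, 0.0), max(-F, 0.0)
    aP = aW + aE
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # boundary rows
    b[0] = 1.0                           # phi(0) = 1, phi(L) = 0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aP, -aE
    return np.linalg.solve(A, b)

phi = solve_convection_diffusion()
```

    The switch on the cell Peclet number is what keeps the hybrid scheme stable at high convection strengths at the cost of accuracy, which is the trade-off the study sought to improve on.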

  6. Multi-criteria decision aid approach for the selection of the best compromise management scheme for ELVs: the case of Cyprus.

    PubMed

    Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M

    2007-08-25

    Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes, and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple-criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
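    A minimal PROMETHEE II sketch with a linear preference function is given below. The scores, weights, and thresholds are hypothetical, and the paper additionally uses level and Gaussian preference functions; this only shows the core pairwise-outranking computation.

```python
import numpy as np

def promethee_net_flows(scores, weights, q=0.0, p=1.0):
    """PROMETHEE II net outranking flows with a linear preference function.

    scores[i, j]: performance of alternative i on criterion j (higher is
    better); the linear preference function rises from 0 at indifference
    threshold q to 1 at strict-preference threshold p.
    """
    n, _ = scores.shape
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]                     # criterion-wise differences
            pref = np.clip((d - q) / (p - q), 0.0, 1.0)   # linear preference degree
            pi_ab = np.dot(weights, pref)                 # aggregated preference of a over b
            phi[a] += pi_ab / (n - 1)                     # positive flow contribution
            phi[b] -= pi_ab / (n - 1)                     # negative flow contribution
    return phi

# Three hypothetical ELV management schemes scored on four criteria
scores = np.array([[0.8, 0.6, 0.5, 0.7],
                   [0.6, 0.9, 0.7, 0.5],
                   [0.4, 0.5, 0.9, 0.6]])
weights = np.array([0.4, 0.3, 0.2, 0.1])
net_flow = promethee_net_flows(scores, weights)
best = int(np.argmax(net_flow))       # the best compromise scheme
```

    Ranking alternatives by net flow is exactly the PROMETHEE II complete ranking; the best compromise scheme is the one with the highest net flow.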

  7. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
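    The APCC matching criterion can be sketched as follows. This shows only the similarity measure and a brute-force match over a toy domain pool, not the paper's classification-and-sorting speedup; the block contents are synthetic.

```python
import numpy as np

def apcc(x, y):
    """Absolute value of Pearson's correlation coefficient between two blocks.

    An APCC of 1 means one block is an exact affine (a*x + b) copy of the
    other, which is precisely the matching condition in FIC.
    """
    x, y = x.ravel() - x.mean(), y.ravel() - y.mean()
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 0.0 if denom == 0 else abs(np.dot(x, y)) / denom

def best_domain_match(range_block, domain_blocks):
    """Index of the domain block with the highest APCC to the range block."""
    return max(range(len(domain_blocks)),
               key=lambda i: apcc(range_block, domain_blocks[i]))

rng = np.random.default_rng(1)
r = rng.random((8, 8))
# Two of the candidates are affine transforms of r, so their APCC is ~1
domains = [rng.random((8, 8)), 3.0 * r + 5.0, -2.0 * r + 1.0]
idx = best_domain_match(r, domains)
```

    The scheme in the paper avoids this brute-force scan by pre-classifying and pre-sorting domain blocks by their APCC to a preset block, so only a small candidate set needs to be examined per range block.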

  8. Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique

    NASA Astrophysics Data System (ADS)

    Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel

    2016-12-01

    Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that FBMC-OQAM with the trellis-based selected mapping (TSLM) scheme not only is superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper is an extension of that work, in which we analyze the TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to the TSLM which requires much less hardware memory than the originally proposed TSLM and also has lower latency. Additionally, the impact of the time duration of partial PAPR on the performance of TSLM is studied, and its lower bound is identified by proposing a suitable time duration. A thorough and fair performance comparison is also made with an existing trellis-based scheme proposed in the literature. The simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.

  9. A framework to support decision making in the selection of sustainable drainage system design alternatives.

    PubMed

    Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David

    2017-10-01

    This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, and has 12 indicators. The multi-criteria analysis methods of entropy weight and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated with a SuDS case in China. Indicators used include flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience is an important design consideration, and it supports scheme selection in the case study. The proposed framework will help a decision maker to choose an appropriate design scheme for implementation without subjectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
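    The entropy-weight and TOPSIS steps of such a framework can be sketched as below. The decision matrix, criteria, and cost/benefit labels are hypothetical placeholders, not the case-study data or the paper's 12-indicator set.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from the Shannon entropy of the matrix.

    Criteria whose values vary more across alternatives carry more
    information and therefore receive larger weights.
    """
    P = X / X.sum(axis=0)                            # column-wise proportions
    n = X.shape[0]
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(n)
    d = 1.0 - E                                      # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """TOPSIS closeness coefficients; benefit[j] marks higher-is-better criteria."""
    V = X / np.linalg.norm(X, axis=0) * w            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical SuDS schemes (rows) x criteria (columns):
# flood volume (cost), resilience (benefit), pollutant removal (benefit), cost (cost)
X = np.array([[120.0, 6.0, 0.82, 1.4],
              [ 95.0, 8.0, 0.74, 1.9],
              [140.0, 5.0, 0.91, 1.1]])
benefit = np.array([False, True, True, False])
w = entropy_weights(X)
closeness = topsis(X, w, benefit)
ranking = np.argsort(-closeness)                     # best scheme first
```

    Because the entropy weights come from the data itself, the ranking is free of the subjective weight assignment that the framework aims to avoid.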

  10. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: preliminary investigation.

    PubMed

    Park, Sang Cheol; Chapman, Brian E; Zheng, Bin

    2011-06-01

    This study developed a computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection and investigated several approaches to improve CAD performance. In the study, 20 computed tomography examinations with various lung diseases were selected, which include 44 verified PE lesions. The proposed CAD scheme consists of five basic steps: 1) lung segmentation; 2) PE candidate extraction using an intensity mask and tobogganing region growing; 3) PE candidate feature extraction; 4) false-positive (FP) reduction using an artificial neural network (ANN); and 5) a multifeature-based k-nearest neighbor for positive/negative classification. In this study, we also investigated the following additional methods to improve CAD performance: 1) grouping 2-D detected features into a single 3-D object; 2) selecting features with a genetic algorithm (GA); and 3) limiting the number of allowed suspicious lesions to be cued in one examination. The results showed that 1) the CAD scheme using tobogganing, an ANN, and the grouping method achieved the maximum detection sensitivity of 79.2%; 2) the maximum scoring method achieved superior performance over other scoring fusion methods; 3) the GA was able to delete "redundant" features and further improve CAD performance; and 4) limiting the maximum number of cued lesions in an examination reduced the FP rate by 5.3 times. Combining these approaches, the CAD scheme achieved 63.2% detection sensitivity with 18.4 FP lesions per examination. The study suggested that the performance of CAD schemes for PE detection depends on many factors, including 1) optimizing the 2-D region grouping and scoring methods; 2) selecting the optimal feature set; and 3) limiting the number of allowed cued lesions per examination.
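    The final positive/negative classification step can be sketched with a generic k-nearest-neighbor vote on synthetic features. The feature values below are hypothetical, not the CT-derived features, and the fraction-of-positive-neighbors score stands in for the scheme's lesion likelihood.

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=5):
    """Multifeature k-nearest-neighbor vote.

    Returns the fraction of positive labels among the k nearest training
    samples, usable as a likelihood score for a candidate lesion.
    """
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]               # indices of k closest samples
    return y_train[nearest].mean()            # score in [0, 1]

rng = np.random.default_rng(2)
pos = rng.normal(loc=1.0, scale=0.3, size=(50, 3))   # hypothetical PE features
neg = rng.normal(loc=0.0, scale=0.3, size=(50, 3))   # hypothetical FP features
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)
score = knn_classify(X, y, np.array([1.0, 1.0, 1.0]))
```

    In the study's pipeline, this classifier operates only on candidates that survived the ANN-based false-positive reduction, and on the feature subset chosen by the genetic algorithm.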

  11. Multi-criteria decision-making on assessment of proposed tidal barrage schemes in terms of environmental impacts.

    PubMed

    Wu, Yunna; Xu, Chuanbo; Ke, Yiming; Chen, Kaifeng; Xu, Hu

    2017-12-15

    For tidal range power plants to be sustainable, the environmental impacts caused by the implementation of various tidal barrage schemes must be assessed before construction. However, several problems exist in current research: firstly, evaluation criteria for the tidal barrage scheme environmental impact assessment (EIA) are not adequate; secondly, the uncertainty of criteria information fails to be processed properly; thirdly, correlation among criteria is unreasonably measured. Hence the contributions of this paper are as follows: firstly, an evaluation criteria system is established from the three dimensions of hydrodynamic, biological and morphological aspects. Secondly, the cloud model is applied to describe the uncertainty of criteria information. Thirdly, the Choquet integral with respect to the λ-fuzzy measure is introduced to measure the correlation among criteria. On the above bases, a multi-criteria decision-making framework for tidal barrage scheme EIA is established to select the optimal scheme. Finally, a case study demonstrates the effectiveness of the proposed framework. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Valence bond and enzyme catalysis: a time to break down and a time to build up.

    PubMed

    Sharir-Ivry, Avital; Varatharaj, Rajapandian; Shurki, Avital

    2015-05-04

    Understanding enzyme catalysis and developing the ability to control it are two great challenges in biochemistry. A few successful examples of computation-based enzyme design have proved the fantastic potential of computational approaches in this field; however, relatively modest rate enhancements have been reported, and the further development of complementary methods is still required. Herein we propose a conceptually simple scheme to identify the specific role that each residue plays in catalysis. The scheme is based on a breakdown of the total catalytic effect into contributions of individual protein residues, which are further decomposed into chemically interpretable components by using valence bond theory. The scheme is shown to shed light on the origin of catalysis in wild-type haloalkane dehalogenase (wt-DhlA) and its mutants. Furthermore, the understanding gained through our scheme is shown to have great potential in facilitating the selection of non-optimal sites for catalysis and suggesting effective mutations to enhance the enzymatic rate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs

    PubMed Central

    Liu, Anfeng; Liu, Xiao; Long, Jun

    2016-01-01

    Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566

  14. Development of smart piezoelectric transducer self-sensing, self-diagnosis and tuning schemes for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Lee, Sang Jun

    Autonomous structural health monitoring (SHM) systems using active sensing devices have been studied extensively to diagnose the current state of aerospace, civil infrastructure and mechanical systems in near real-time, and aim to eventually reduce life-cycle costs by replacing current schedule-based maintenance with condition-based maintenance. This research develops four schemes for SHM applications: (1) a simple and reliable PZT transducer self-sensing scheme; (2) a smart PZT self-diagnosis scheme; (3) an instantaneous reciprocity-based PZT diagnosis scheme; and (4) an effective PZT transducer tuning scheme. First, this research develops a PZT transducer self-sensing scheme, which is a necessary condition for accomplishing PZT transducer self-diagnosis. The main advantages of the proposed self-sensing approach are its simplicity and adaptability. The necessary hardware is only an additional self-sensing circuit, which includes a minimum of electric components. With this circuit, the self-sensing parameters can be calibrated instantaneously in the presence of changing operational and environmental conditions of the system. In particular, this self-sensing scheme focuses on estimating the mechanical response in the time domain for the subsequent applications of PZT transducer self-diagnosis and tuning with guided wave propagation. The most significant challenge of this self-sensing comes from the fact that the magnitude of the mechanical response is generally several orders of magnitude smaller than that of the input signal. The proposed self-sensing scheme takes full advantage of the fact that any user-defined input signal can be applied to a host structure and the input waveform is known. The performance of the proposed self-sensing scheme is demonstrated by theoretical analysis, numerical simulations and various experiments. Second, this research proposes a smart PZT transducer self-diagnosis scheme based on the developed self-sensing scheme.
Conventionally, the capacitance change of the PZT wafer is monitored to identify the abnormal PZT condition because the capacitance of the PZT wafer is linearly proportional to its size and also related to the bonding condition. However, temperature variation is another primary factor that affects the PZT capacitance. To ensure the reliable transducer self-diagnosis, two different self-diagnosis features are proposed to differentiate two main PZT wafer defects, i.e., PZT debonding and PZT cracking, from temperature variations and structural damages. The PZT debonding is identified using two indices based on time reversal process (TRP) without any baseline data. Also, the PZT cracking is identified by monitoring the change of the generated Lamb wave power ratio index with respect to the driving frequency. The uniqueness of this self-diagnosis scheme is that the self-diagnosis features can differentiate the PZT defects from environmental variations and structural damages. Therefore, it is expected to minimize false-alarms which are induced by operational or environmental variations as well as structural damages. The applicability of the proposed self-diagnosis scheme is verified by theoretical analysis, numerical simulations, and experimental tests. Third, a new methodology of guided wave-based PZT transducer diagnosis is developed to identify PZT transducer defects without using prior baseline data. This methodology can be applied when a number of same-size PZT transducers are attached to a target structure to form a sensor network. The advantage of the proposed technique is that abnormal PZT transducers among intact PZT transducers can be detected even when the system being monitored is subjected to varying operational and environmental conditions or changing structural conditions. To achieve this goal, the proposed diagnosis technique utilizes the linear reciprocity of guided wave propagation between a pair of surface-bonded PZT transducers. 
    Finally, a PZT transducer tuning scheme is developed for selective Lamb wave excitation and sensing. This is useful for structural damage detection based on Lamb wave propagation because the proper transducer size and the corresponding input frequency are crucial for selective Lamb wave excitation and sensing. The circular PZT response model is derived, and the energy balance is included for a better prediction of the PZT responses because the existing PZT response models do not consider any energy balance between Lamb wave modes. In addition, two calibration methods are also suggested in order to model the PZT responses more accurately by considering a bonding layer effect. (Abstract shortened by UMI.)

  15. Secure Cluster Head Sensor Elections Using Signal Strength Estimation and Ordered Transmissions

    PubMed Central

    Wang, Gicheol; Cho, Gihwan

    2009-01-01

    In clustered sensor networks, electing CHs (Cluster Heads) in a secure manner is very important because they collect data from sensors and send the aggregated data to the sink. If a compromised node is elected as a CH, it can illegally acquire data from all the members and even send forged data to the sink. Nevertheless, most of the existing CH election schemes have not treated the problem of secure CH election. Recently, random-value-based protocols have been proposed to resolve the secure CH election problem. However, these schemes cannot prevent an attacker from suppressing its contribution to change the CH election result, or from selectively forwarding its contribution to cause disagreement on the CH election result. In this paper, we propose a modified random value scheme to prevent these disturbances. Our scheme dynamically adjusts the forwarding order of contributions and discards a received contribution when its signal strength is lower than the specified level to prevent these malicious actions. The simulation results have shown that our scheme effectively prevents attackers from changing and splitting an agreement on the CH election result. They have also shown that our scheme is more energy-efficient than other schemes. PMID:22408550

  16. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  17. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from the CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
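    The downsample-code-upsample idea can be illustrated with a 1-D cubic interpolator. The sketch uses Keys' cubic convolution kernel with a = -0.5, a standard approximation to cubic-spline interpolation; it is not the paper's CSI-JPEG codec or its ρ-domain rate control, and the test signal is arbitrary.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel (a = -0.5 approximates a cubic spline)."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def upsample2(signal):
    """Upsample a 1-D signal by 2 using cubic convolution interpolation."""
    n = len(signal)
    out = np.empty(2 * n)
    for i in range(2 * n):
        x = i / 2.0                       # position in the low-res grid
        k = int(np.floor(x))
        acc = 0.0
        for j in range(k - 1, k + 3):     # 4-tap support, edges clamped
            acc += signal[min(max(j, 0), n - 1)] * cubic_kernel(x - j)
        out[i] = acc
    return out

# Down-then-up round trip on a smooth signal loses very little fidelity,
# which is why sampling-based coding wins at low bit rates
t = np.linspace(0.0, 1.0, 64)
sig = np.sin(2 * np.pi * 2 * t)
recon = upsample2(sig[::2])
err = np.sqrt(np.mean((recon - sig) ** 2))
```

    For detail-rich content or high bit rates the interpolation error dominates instead, which is the regime where the adaptive algorithm falls back to standard JPEG.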

  18. Improving groundwater predictions utilizing seasonal precipitation forecasts from general circulation models forced with sea surface temperature forecasts

    USGS Publications Warehouse

    Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad

    2014-01-01

    Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential for developing 6-month-ahead groundwater-level forecasts based on the precipitation forecasts from the ECHAM 4.5 General Circulation Model forced with sea surface temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site, using the previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation.
Results from the research reported in this paper show that using precipitation forecasts in climate models improves the ability to predict the interannual variability of winter and spring streamflow and groundwater levels over the basin. However, significant conditional bias exists in all the three modeling schemes, which indicates the need to consider improved modeling schemes as well as the availability of longer time-series of observed hydroclimatic information over the basin.
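    The PCR building block used by schemes 1 and 2 can be sketched as follows: project the predictors onto their leading principal components, then fit ordinary least squares in that subspace. The synthetic correlated predictors below are illustrative, not the USGS well data.

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_test, k=2):
    """Principal component regression.

    Standardizes the predictors, keeps the first k principal components,
    fits OLS on the component scores, and predicts for X_test.
    """
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    W = Vt[:k].T                                   # loadings of first k PCs
    T = Z @ W                                      # component scores
    design = np.column_stack([np.ones(len(T)), T])
    beta = np.linalg.lstsq(design, y_train, rcond=None)[0]
    T_test = ((X_test - mu) / sd) @ W
    return beta[0] + T_test @ beta[1:]

# Synthetic example: the response depends on one latent direction shared
# by three correlated predictors (stand-ins for antecedent observations)
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
X = latent @ np.array([[1.0, 0.8, 0.6]]) + 0.05 * rng.normal(size=(200, 3))
y = 2.0 * latent[:, 0] + 0.1 * rng.normal(size=200)
pred = pcr_fit_predict(X[:150], y[:150], X[150:], k=1)
corr = np.corrcoef(pred, y[150:])[0, 1]
```

    Holding out part of the data and scoring the predictions by correlation, as done here, parallels the paper's leave-five-out cross-validation with the correlation-coefficient metric.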

  19. Experimental investigation of colorless ONU employing superstructured fiber Bragg gratings in WDM/OCDMA-PON

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Cheng, Liang; Chen, Biao

    2009-11-01

    The colorless optical network unit (ONU) is a very important concept for wavelength division multiplexing (WDM) based passive optical networks (PONs). We present a novel scheme to construct non-wavelength-selective ONUs in a WDM/OCDMA-PON by making use of the broad spectral band of superstructured fiber Bragg gratings (SSFBGs). The experimental results reveal that the spectrum-sliced encoded signals from different wavelength channels can be successfully decoded with the same SSFBGs, and thus the proposed colorless ONU scheme is shown to be feasible.

  20. Applicability of Separation Potentials to Determining the Parameters of Cascade Efficiency in Enrichment of Ternary Mixtures

    NASA Astrophysics Data System (ADS)

    Palkin, V. A.; Igoshin, I. S.

    2017-01-01

    The separation potentials suggested by various researchers for separating multicomponent isotopic mixtures are considered. Their applicability to determining the efficiency parameters for enrichment of a ternary mixture in a cascade with an optimum scheme of connection of stages made up of elements with three takeoffs is assessed. Based on the results of this assessment, the separation potential that most precisely characterizes the separative power and other efficiency parameters of the stages and cascade schemes has been selected.

  1. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (Location-Based Services), the demand for commercialization of indoor location has been increasing, but the technology is not yet mature. Currently, indoor location accuracy, algorithm complexity, and positioning cost are hard to consider simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on low-energy Bluetooth. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; and 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic knowledge-accumulation process rather than a single positioning step. The scheme uses inexpensive equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial deployment.

  2. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    PubMed Central

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be exploited for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by creating virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation result, which closely follows the simulation result, shows that the developed scheme makes more advancement toward the sink than the usual route-selection decisions of comparable ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results demonstrate the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362
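
    The cluster-internal next-hop rule described above can be sketched as follows; the coordinates, link-quality scale and threshold are hypothetical, and the real protocol also folds in spectrum correlation when forming the clusters:

```python
import math

def next_hop(candidates, node_pos, sink_pos, min_quality=0.6):
    """Pick a next hop from the virtual cluster: among neighbours whose
    link quality passes the threshold, choose the one making the most
    geographic advance toward the sink.
    `candidates` is a list of (neighbour_id, (x, y), link_quality)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    my_dist = dist(node_pos, sink_pos)
    best, best_advance = None, 0.0
    for nid, pos, quality in candidates:
        advance = my_dist - dist(pos, sink_pos)
        if quality >= min_quality and advance > best_advance:
            best, best_advance = nid, advance
    return best

hop = next_hop([("a", (4, 0), 0.9), ("b", (7, 0), 0.4), ("c", (3, 0), 0.95)],
               node_pos=(0, 0), sink_pos=(10, 0))
```

    Here "b" makes the most advance but fails the quality gate, so "a" is chosen; with no acceptable neighbour the function returns None and recovery would kick in.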

  3. A novel two-stage evaluation system based on a Group-G1 approach to identify appropriate emergency treatment technology schemes in sudden water source pollution accidents.

    PubMed

    Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong

    2016-02-01

    Sudden water source pollution resulting from hazardous materials has gradually become a major threat to the safety of the urban water supply. Over the past years, various treatment techniques have been proposed for the removal of the pollutants to minimize the threat of such pollution. Given the diversity of techniques available, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed, based on a circulation-correction improved Group-G1 method, to determine the optimal emergency treatment technology scheme, covering contaminant elimination in both drinking water sources and water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and was subdivided into four levels. Then, a technique evaluation index system containing four sets of criteria weights was constructed in stage 2 to obtain the optimum treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested on a real cadmium-contamination accident that occurred in 2012. The results show that this system is capable of facilitating scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.
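
    The Group-G1 aggregation builds on the single-expert G1 (order-relation) weighting method; a minimal sketch of that base step, with illustrative importance ratios (the circulation-correction and group-aggregation layers of the paper are not reproduced here):

```python
def g1_weights(ratios):
    """G1 (order-relation) weights for one expert.
    `ratios[k]` is the expert's importance ratio w_k / w_(k+1) for
    criteria already sorted from most to least important.
    Returns normalized weights, most important criterion first."""
    prods = []
    for k in range(len(ratios)):
        p = 1.0
        for r in ratios[k:]:
            p *= r
        prods.append(p)
    w_last = 1.0 / (1.0 + sum(prods))   # weight of least important criterion
    weights = [w_last]
    for r in reversed(ratios):          # back-substitute up the ranking
        weights.append(weights[-1] * r)
    return list(reversed(weights))

w = g1_weights([1.2, 1.4])  # three criteria, two adjacent importance ratios
```
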

  4. Thermal control extravehicular life support system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The results of a comprehensive study which defined an Extravehicular Life Support System Thermal Control System (TCS) are presented. The design of the prototype hardware and a detailed summary of the prototype TCS fabrication and test effort are given. Several heat rejection subsystems, water management subsystems, humidity control subsystems, pressure control schemes and temperature control schemes were evaluated. Alternative integrated TCS systems were studied, and an optimum system was selected based on a quantitative weighting of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection; a bubble expansion tank for water management; a slurper and rotary separator for humidity control; and a pump, a temperature control valve, a gas separator and a vehicle umbilical connector for water transport. The prototype hardware complied with program objectives.

  5. Physical angular momentum separation for QED

    NASA Astrophysics Data System (ADS)

    Sun, Weimin

    2017-04-01

    We study the non-uniqueness problem of the gauge-invariant angular momentum separation for the case of QED, which stems from the recent controversy concerning the proper definitions of the orbital angular momentum and spin operators of the individual parts of a gauge field system. For free quantum electrodynamics without matter, we show that the basic requirement of Euclidean symmetry selects a unique physical angular momentum separation scheme from the multitude of possible separation schemes constructed using the various gauge-invariant extensions (GIEs). Based on these results, we propose a set of natural angular momentum separation schemes for the case of interacting QED by invoking the formalism of asymptotic fields. Some perspectives on this problem for the case of QCD are briefly discussed.

  6. Measurement-device-independent quantum key distribution with multiple crystal heralded source with post-selection

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Shang-Hong, Zhao; MengYi, Deng

    2018-03-01

    The multiple crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric downconversion process, we present a modified MDI-QKD scheme with MHPS in which two architectures are proposed, a symmetric scheme and an asymmetric scheme. The symmetric scheme, in which crystals are linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue that the symmetric scheme suffers as the number of crystals increases. The numerical simulations show that our modified scheme has clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source or a traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.

  7. Evaluation and determination of soil remediation schemes using a modified AHP model and its application in a contaminated coking plant.

    PubMed

    Li, Xingang; Li, Jia; Sui, Hong; He, Lin; Cao, Xingtao; Li, Yonghong

    2018-07-05

    Soil remediation has been considered one of the most difficult pollution treatment tasks due to its high complexity in contaminants, geological conditions, usage, urgency, etc. The diversity of remediation technologies further complicates quick selection of suitable remediation schemes, even after the site investigation has been done. Herein, a sustainable decision-support hierarchical model has been developed to select, evaluate and determine preferred soil remediation schemes comprehensively, based on a modified analytic hierarchy process (MAHP). This MAHP method combines a competence model and the Grubbs criteria with conventional AHP. It not only considers the competence differences among experts in group decisions, but also adjusts, through sample analysis, the large deviations caused by different experts' preferences. This correction makes the final remediation decision more reasonable. In this model, different evaluation criteria, including economic effect, environmental effect and technological effect, are employed to evaluate the integrated performance of remediation schemes, followed by a strict computation using the above MAHP. To confirm the feasibility of the developed model, it was tested on a benzene-workshop contaminated site at a Beijing coking plant. Beyond soil remediation, this MAHP model could also be applied in other fields involving multi-criteria group decision making. Copyright © 2018 Elsevier B.V. All rights reserved.
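
    The Grubbs-based screening of deviant expert judgments can be sketched as follows; the critical value must come from Grubbs' tables for the chosen significance level and panel size, and the scores below are invented for illustration:

```python
def screen_expert_scores(scores, g_crit):
    """Iteratively drop the single most deviant expert score while
    Grubbs' statistic G = max|x - mean| / s exceeds the caller-supplied
    critical value g_crit (taken from Grubbs' tables)."""
    scores = list(scores)
    while len(scores) > 2:
        n = len(scores)
        mean = sum(scores) / n
        s = (sum((v - mean) ** 2 for v in scores) / (n - 1)) ** 0.5
        if s == 0:
            break
        worst = max(scores, key=lambda v: abs(v - mean))
        if abs(worst - mean) / s > g_crit:
            scores.remove(worst)
        else:
            break
    return scores

# One expert's score of 3.0 deviates strongly from the panel consensus.
kept_scores = screen_expert_scores([7.8, 8.0, 8.1, 7.9, 3.0], g_crit=1.715)
```
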

  8. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, including the coefficient of determination (R2), Akaike's Information Criterion (AIC), Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the complex model selection criteria (FIA, followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO is more robust on a decadal scale for OND rainfall than for MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative ENSO phase (La Nina) leads to dry conditions, while the positive phase (El Nino) brings enhanced wet conditions.
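
    The information-criterion comparison at the heart of the study can be illustrated with Gaussian-likelihood AIC and BIC; the candidate models and numbers below are hypothetical stand-ins for regressions of OND rainfall on ENSO indices:

```python
import math

def aic(n, rss, k):
    """Gaussian-likelihood AIC: n*ln(RSS/n) + 2k for k parameters."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """BIC swaps the 2k penalty for k*ln(n), punishing model size more."""
    return n * math.log(rss / n) + k * math.log(n)

# (model name, parameter count, residual sum of squares) - illustrative
fits = [("Nino3.4 only", 2, 310.0),
        ("Nino3.4 + IOD", 3, 260.0),
        ("Nino3.4 + IOD + noise", 4, 258.0)]
n = 35  # seasons of data
best_aic = min(fits, key=lambda f: aic(n, f[2], f[1]))
best_bic = min(fits, key=lambda f: bic(n, f[2], f[1]))
```

    Both criteria reject the third model because its extra predictor buys almost no reduction in residual error.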

  9. Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert

    2017-01-01

    This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.

  10. Determining Consumer Preference for Furniture Product Characteristics

    ERIC Educational Resources Information Center

    Turner, Carolyn S.; Edwards, Kay P.

    1974-01-01

    The paper describes instruments for determining preferences of consumers for selected product characteristics associated with furniture choices--specifically style, color, color scheme, texture, and materials--and the procedures for administration of those instruments. Results are based on a random sampling of public housing residents. (Author/MW)

  11. Site selection model for new metro stations based on land use

    NASA Astrophysics Data System (ADS)

    Zhang, Nan; Chen, Xuewu

    2015-12-01

    Since the construction of a metro system generally lags behind the development of urban land use, sites of metro stations should adapt to their surroundings, an issue rarely discussed in previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to obtain a ranking of the schemes. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
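
    The entropy-weight half of the model's AHP/entropy combination can be sketched as follows; the two-site, two-indicator matrix is invented for illustration:

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from a decision matrix
    (rows = candidate sites, columns = positive land-use indicators).
    A column in which sites barely differ carries little information
    and receives a small weight."""
    m = len(matrix)
    n = len(matrix[0])
    degrees = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        degrees.append(1.0 - entropy)  # divergence degree of indicator j
    s = sum(degrees)
    return [d / s for d in degrees]

w = entropy_weights([[0.9, 0.5],
                     [0.1, 0.5]])  # second indicator never discriminates
```

    In the paper's model these objective weights are then combined with the subjective AHP weights before ranking the candidate sites.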

  12. A malware detection scheme based on mining format information.

    PubMed

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach based on mining the format information of PE (portable executable) files. Based on an in-depth analysis of the static format information of PE files, we extracted 197 features and applied feature selection methods to reduce the dimensionality of the features while maintaining acceptably high performance. When the selected features were used to train classification algorithms, our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate.
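
    As a toy illustration of the dimensionality-reduction step, features whose values barely vary across samples can be ranked low and dropped; the feature names and vectors are invented, and the paper's actual selection methods are not reproduced here:

```python
def variance_rank(samples, feature_names):
    """Rank features by variance across samples; near-constant PE-format
    fields carry little discriminative information and can be dropped.
    `samples` is a list of equal-length numeric feature vectors."""
    n = len(samples)
    ranked = []
    for j, name in enumerate(feature_names):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        ranked.append((var, name))
    return [name for _, name in sorted(ranked, reverse=True)]

# A constant "magic" field is useless for telling samples apart.
order = variance_rank([[1, 0, 10], [1, 1, 20], [1, 0, 30]],
                      ["magic", "has_tls", "n_sections"])
```
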

  14. Intergeneric somatic hybrid plants of Citrus sinensis cv. Hamlin and Poncirus trifoliata cv. Flying Dragon.

    PubMed

    Grosser, J W; Gmitter, F G; Chandler, J L

    1988-01-01

    Intergeneric somatic hybrid plants between 'Hamlin' sweet orange [Citrus sinensis (L.) Osbeck] and 'Flying Dragon' trifoliate orange (Poncirus trifoliata Raf.) were regenerated following protoplast fusion. 'Hamlin' protoplasts, isolated from a habituated embryogenic suspension culture, were fused chemically with 'Flying Dragon' protoplasts isolated from juvenile leaf tissue. The hybrid selection scheme was based on complementation of the regenerative ability of the 'Hamlin' protoplasts with the subsequent expression of the trifoliate leaf character of 'Flying Dragon.' Hybrid plants were regenerated via somatic embryogenesis and multiplied organogenically. Hybrid morphology was intermediate to that of the parents. Chromosome counts indicated that the hybrids were allotetraploids (2n=4x=36). Malate dehydrogenase (MDH) isozyme patterns confirmed the hybrid nature of the regenerated plants. These genetically unique somatic hybrid plants will be evaluated for citrus rootstock potential. The cell fusion, selection, and regeneration scheme developed herein should provide a general means to expand the germplasm base of cultivated Citrus by intergeneric hybridization with related sexually incompatible genera.

  15. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, the degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extracting bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.

  16. High-order conservative finite difference GLM-MHD schemes for cell-centered MHD

    NASA Astrophysics Data System (ADS)

    Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi

    2010-08-01

    We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes, and slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexity required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
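
    The slope-limited reconstruction mentioned among the four techniques can be illustrated in one dimension with the classic minmod limiter; this is a generic second-order sketch, not the paper's higher-order WENO/MP variants:

```python
def minmod(a, b):
    """Minmod limiter: zero at extrema, otherwise the smaller slope."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct_interfaces(u):
    """Slope-limited linear reconstruction from 1-D zone averages:
    second-order in smooth regions, non-oscillatory at discontinuities.
    For each interior zone i, returns the state just left of interface
    i+1/2 and just right of interface i-1/2."""
    left, right = [], []
    for i in range(1, len(u) - 1):
        s = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        left.append(u[i] + 0.5 * s)
        right.append(u[i] - 0.5 * s)
    return left, right
```

    On smooth data the limiter keeps the full slope; across a jump it clips the slope to zero, which is what prevents spurious oscillations.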

  17. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+

    PubMed Central

    Lopez, B. M.; Kang, H. S.; Kim, T. H.; Viterbo, V. S.; Kim, H. S.; Na, C. S.; Seo, K. S.

    2016-01-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222
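
    The trade-off behind these scenarios follows the standard breeder's equation. The sketch below reuses the 26% young-boar accuracy gain reported above, while the baseline accuracy, selection intensity, genetic standard deviation and generation intervals are assumed values, not figures from the study:

```python
def annual_genetic_gain(intensity, accuracy, sigma_g, gen_interval):
    """Breeder's equation: gain per year = (i * r * sigma_g) / L."""
    return intensity * accuracy * sigma_g / gen_interval

# Assumed values: genomic selection raises young-boar accuracy by the
# 26% reported above and also shortens the generation interval.
gain_cs = annual_genetic_gain(1.4, 0.50, 1.0, 4.0)         # progeny test
gain_gs = annual_genetic_gain(1.4, 0.50 * 1.26, 1.0, 2.5)  # genomic
```

    Under these assumptions the genomic scheme roughly doubles annual gain, which is the mechanism behind its higher monetary genetic gain despite higher costs.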

  18. A Tree Based Broadcast Scheme for (m, k)-firm Real-Time Stream in Wireless Sensor Networks.

    PubMed

    Park, HoSung; Kim, Beom-Su; Kim, Kyong Hoon; Shah, Babar; Kim, Ki-Il

    2017-11-09

    Recently, various unicast routing protocols have been proposed to deliver measured data from the sensor node to the sink node within a predetermined deadline in wireless sensor networks. In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model, and improvements in energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there is as yet no study of a real-time broadcast protocol supporting an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new (m, k)-firm-based Real-time Broadcast Protocol (FRBP), constructing a broadcast tree to satisfy the (m, k)-firm constraint, which is applicable to the real-time model in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To overcome unstable network environments, the recovery scheme invokes rapid partial tree reconstruction in order to designate another node as the parent on the tree, according to the measured (m, k)-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP over existing schemes in terms of average deadline missing ratio, average throughput and energy consumption.
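
    The (m, k)-firm condition that drives FRBP's recovery decisions can be checked with a sliding window; this generic bookkeeping sketch is an assumption, not code from the paper:

```python
from collections import deque

def mk_firm_satisfied(history, m, k):
    """(m, k)-firm constraint: at least m of the last k broadcast
    deadlines were met. `history` holds booleans, most recent last;
    deque(maxlen=k) keeps only the last k entries."""
    window = deque(history, maxlen=k)
    return sum(window) >= m
```

    A node whose measured window falls below m would trigger the partial tree reconstruction described above.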

  19. Active learning for noisy oracle via density power divergence.

    PubMed

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework built on robust measures based on density power divergence. By minimizing a density power divergence, such as the β-divergence or γ-divergence, one can estimate the model accurately even in the presence of noisy labels in the data. Accordingly, we develop query selection measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
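
    The robustness of density-power-divergence estimation can be made concrete for a one-dimensional Gaussian location model; the data, β value and brute-force grid search are illustrative, not the paper's learning setup:

```python
import math

def dpd_loss(mu, data, beta=0.5, sigma=1.0):
    """Density power divergence loss for an N(mu, sigma^2) model.
    Observations in low-density regions (outliers) enter only through
    f(x)^beta, so they are automatically down-weighted."""
    def pdf(x):
        return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                / (sigma * math.sqrt(2 * math.pi)))
    # closed form of the integral of f^(1+beta) for a Gaussian
    integral = 1.0 / ((2 * math.pi) ** (beta / 2)
                      * sigma ** beta * math.sqrt(1 + beta))
    empirical = sum(pdf(x) ** beta for x in data) / len(data)
    return integral - (1 + 1 / beta) * empirical

data = [0.1, -0.2, 0.05, 8.0]  # one gross outlier at 8.0
grid = [i / 100 for i in range(-100, 301)]
mu_hat = min(grid, key=lambda m: dpd_loss(m, data))
```

    The DPD estimate stays near the inlier cluster around 0, whereas the ordinary sample mean is dragged far toward the outlier.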

  20. Genetic and economic benefits of selection based on performance recording and genotyping in lower tiers of multi-tiered sheep breeding schemes.

    PubMed

    Santos, Bruno F S; van der Werf, Julius H J; Gibson, John P; Byrne, Timothy J; Amer, Peter R

    2017-01-17

    Performance recording and genotyping in the multiplier tier of multi-tiered sheep breeding schemes could potentially reduce the difference in average genetic merit between nucleus and commercial flocks, and create additional economic benefits for the breeding structure. The genetic change in a multiple-trait breeding objective was predicted for various selection strategies that included performance recording, parentage testing and genomic selection. A deterministic simulation model was used to predict selection differentials and the flow of genetic superiority through the different tiers. Cumulative discounted economic benefits were calculated based on trait gains achieved in each of the tiers, considering the extra revenue and associated costs of applying recording, genotyping and selection practices in the multiplier tier of the breeding scheme. Performance recording combined with genomic or parentage information in the multiplier tier reduced the genetic lag between the nucleus and commercial flock by 2 to 3 years. The overall economic benefits of improved performance in the commercial tier offset the costs of recording the multiplier; however, it took more than 18 years before the cumulative net present value of benefits offset the costs at current test prices. Strategies in which recorded multiplier ewes were selected as replacements for the nucleus flock modestly increased profitability compared to a closed nucleus structure. Applying genomic selection is the most beneficial strategy if testing costs can be reduced or if only a proportion of the selection candidates are genotyped. When the cost of genotyping was reduced, scenarios that combine performance recording with genomic selection were more profitable and reached the breakeven point about 10 years earlier. Economic benefits can be generated in multiplier flocks by implementing performance recording in conjunction with either DNA pedigree recording or genomic technology. These recording practices reduce the long genetic lag between the nucleus and commercial flocks in multi-tiered breeding programs. Under current genotyping costs, the time to breakeven was generally very long, although it varied between strategies. Strategies using either genomic selection or DNA pedigree verification are economically viable in the long term, provided the price paid for the tests falls below current levels.
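
    The breakeven arithmetic can be sketched as a cumulative discounted cash flow in which genetic benefit compounds while testing costs recur; the numbers and the simple compounding assumption are illustrative, not the paper's model:

```python
def breakeven_year(gain_per_year, cost_per_year, discount_rate=0.07,
                   horizon=30):
    """First year in which the cumulative discounted net value of a
    multiplier recording strategy turns positive; None if it never does
    within the horizon. Each year's selection adds a permanent increment
    expressed in all later years, so the benefit stream ramps up while
    costs stay flat."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        benefit = gain_per_year * year  # accumulated expressed gain
        net = (benefit - cost_per_year) / (1 + discount_rate) ** year
        cumulative += net
        if cumulative > 0:
            return year
    return None

year = breakeven_year(gain_per_year=1.0, cost_per_year=10.0)
```

    With a slow-ramping benefit and flat recurring costs, the payback point lands decades out, mirroring the long breakeven times reported above.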

  1. Hybrid Packet-Pheromone-Based Probabilistic Routing for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Kashkouli Nejad, Keyvan; Shawish, Ahmed; Jiang, Xiaohong; Horiguchi, Susumu

    Ad-Hoc networks are collections of mobile nodes communicating using wireless media without any fixed infrastructure. Minimal configuration and quick deployment make Ad-Hoc networks suitable for emergency situations like natural disasters or military conflicts. Current Ad-Hoc networks can support either high mobility or a high transmission rate, but not both at once, because they employ static approaches in their routing schemes. However, due to the continuous growth of Ad-Hoc networks in size, node mobility and transmission rate, the development of new adaptive and dynamic routing schemes has become crucial. In this paper we propose a new routing scheme to support high transmission rates and high node mobility simultaneously in a large Ad-Hoc network, by combining a newly proposed packet-pheromone-based approach with the Hint Based Probabilistic Protocol (HBPP) for congestion avoidance with dynamic path selection in the packet forwarding process. Because it uses already-available feedback information, the proposed algorithm does not introduce any additional overhead. The extensive simulation-based analysis conducted in this paper indicates that the proposed algorithm offers low packet latency and achieves a significantly higher delivery probability in comparison with the Hint-Based Probabilistic Protocol (HBPP).

  2. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, on the general-purpose processors (such as the PowerPC) available on FPGAs, such an algorithm achieves a low frame rate, far from real-time requirements. This paper proposes a hardware accelerator, based on the clustering change detection scheme, capable of detecting changes in a scene in real time. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in grayscale.

  3. Parameter Tuning and Calibration of RegCM3 with MIT-Emanuel Cumulus Parameterization Scheme over CORDEX East Asian Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Liwei; Qian, Yun; Zhou, Tianjun

    2014-10-01

    In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criterion" (RH), which was not considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, contributed by the positive responses of explicit rainfall. Following an increase in RH, the increase in low-level convergence and the associated increase in cloud water favor an increase in explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.

  4. Assurance of energy efficiency and data security for ECG transmission in BASNs.

    PubMed

    Ma, Tao; Shrestha, Pradhumna Lal; Hempel, Michael; Peng, Dongming; Sharif, Hamid; Chen, Hsiao-Hwa

    2012-04-01

    With the technological advancement in body area sensor networks (BASNs), low-cost, high-quality electrocardiographic (ECG) diagnosis systems have become important equipment for healthcare service providers. However, energy consumption and data security in ECG systems for BASNs are still two major challenges to tackle. In this study, we investigate the properties of compressed ECG data for energy saving, as an effort to devise a selective encryption mechanism and a two-rate unequal error protection (UEP) scheme. The proposed selective encryption mechanism provides a simple yet effective security solution for an ECG sensor-based communication platform, in which only one percent of the data is encrypted without compromising ECG data security. This encrypted portion is essential to ECG data quality because of its disproportionately large contribution to distortion reduction. The two-rate UEP scheme achieves a significant additional energy saving by investing communication energy unequally according to the outcomes of the selective encryption, and thus maintains a high ECG data transmission quality. Our results show improvements in communication energy saving of about 40%, and demonstrate a higher transmission quality and security measured in terms of wavelet-based weighted percent root-mean-squared difference.
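    The idea of encrypting only the most distortion-critical one percent of the compressed data can be sketched as below; the coefficient model, the iterated-hash keystream, and all names are invented stand-ins for the paper's codec and stream cipher:

```python
import hashlib

def keystream(key, n):
    """Toy keystream from iterated SHA-256 (a stand-in for a real stream cipher)."""
    out, block = bytearray(), key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def selective_encrypt(coeffs, key, fraction=0.01):
    """XOR-encrypt only the largest-magnitude `fraction` of the coefficients.

    Coefficients are assumed to be non-negative 16-bit integers (as after
    quantization); returns the modified list and the encrypted indices.
    """
    k = max(1, int(len(coeffs) * fraction))
    idx = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k]
    ks = keystream(key, 2 * k)
    out = list(coeffs)
    for j, i in enumerate(idx):
        out[i] ^= ks[2 * j] << 8 | ks[2 * j + 1]  # XOR keeps the word size
    return out, idx

def selective_decrypt(coeffs, idx, key):
    """Undo the XOR on the recorded indices (XOR is its own inverse)."""
    ks = keystream(key, 2 * len(idx))
    out = list(coeffs)
    for j, i in enumerate(idx):
        out[i] ^= ks[2 * j] << 8 | ks[2 * j + 1]
    return out
```

    Because only the few coefficients that dominate distortion reduction are ciphered, the energy cost of encryption stays small while the unencrypted remainder is useless for reconstructing a diagnostic-quality signal.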

  5. Program scheme using common source lines in channel stacked NAND flash memory with layer selection by multilevel operation

    NASA Astrophysics Data System (ADS)

    Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook

    2018-02-01

    To obtain a high channel boosting potential and reduce program disturbance in channel stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The TCAD simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.

  6. Selective epitaxial growth of monolithically integrated GaN-based light emitting diodes with AlGaN/GaN driving transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhaojun; Ma, Jun; Huang, Tongde

    2014-03-03

    In this Letter, we report selective epitaxial growth of monolithically integrated GaN-based light emitting diodes (LEDs) with AlGaN/GaN high-electron-mobility transistor (HEMT) drivers. Two integration schemes, selective epitaxial removal (SER) and selective epitaxial growth (SEG), were compared. We found that SER resulted in serious degradation of the underlying LEDs in a HEMT-on-LED structure due to damage of the p-GaN surface. The problem was circumvented by SEG, which avoided plasma etching and minimized device degradation. The integrated HEMT-LEDs fabricated by SEG exhibited characteristics comparable to those of unintegrated devices and emitted blue light modulated by gate biasing.

  7. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential for application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we designed two new activity measures for the fusion of the lowpass and highpass subbands. These measures are built on the fact that the human visual system (HVS) perceives image quality mainly through certain low-level features. The selection principles for the different subbands are then derived from the corresponding activity measures. Finally, the merged subbands are constructed according to these selection principles, and the final fused image is produced by applying the inverse NSCT to the merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation.

  8. Barriers and facilitators to implementation, uptake and sustainability of community-based health insurance schemes in low- and middle-income countries: a systematic review.

    PubMed

    Fadlallah, Racha; El-Jardali, Fadi; Hemadi, Nour; Morsi, Rami Z; Abou Samra, Clara Abou; Ahmad, Ali; Arif, Khurram; Hishi, Lama; Honein-AbouHaidar, Gladys; Akl, Elie A

    2018-01-29

    Community-based health insurance (CBHI) has evolved as an alternative health financing mechanism to out-of-pocket payments in low- and middle-income countries (LMICs), particularly in areas where government or employer-based health insurance is minimal. This systematic review aimed to assess the barriers and facilitators to implementation, uptake and sustainability of CBHI schemes in LMICs. We searched six electronic databases and grey literature. We included both quantitative and qualitative studies written in English and published after 1992. Two reviewers worked in duplicate and independently to complete study selection, data abstraction, and assessment of methodological features. We synthesized the findings based on thematic analysis and categorized them according to the ecological model into individual, interpersonal, community and systems levels. Of 15,510 citations, 51 met the eligibility criteria. Individual factors included awareness and understanding of the concept of CBHI, trust in the scheme and scheme managers, perceived service quality, and demographic characteristics, which influenced enrollment and sustainability. Interpersonal factors such as household dynamics, other family members enrolled in the scheme, and social solidarity influenced enrollment and renewal of membership. Community-level factors such as culture and community involvement in scheme development influenced enrollment and sustainability of the scheme. Systems-level factors encompassed governance and financial and delivery arrangements. Government involvement, accountability of scheme management, and a strong policymaker-implementer relationship facilitated implementation and sustainability of the scheme. Packages that covered outpatient and inpatient care and those tailored to community needs contributed to increased enrollment. 
The amount and timing of premium collection were reported to influence enrollment negatively, while factors reported as threats to sustainability included facility bankruptcy, operating on small budgets, rising healthcare costs, a small risk pool, irregular contributions, and overutilization of services. At the delivery level, accessibility of facilities, the facility environment, and health personnel influenced enrollment, service utilization and dropout rates. A multitude of interrelated factors at the individual, interpersonal, community and systems levels drive the implementation, uptake and sustainability of CBHI schemes. We discuss the implications of the findings at the policy and research levels. The review protocol is registered in PROSPERO, the international prospective register of systematic reviews (ID: CRD42015019812).

  9. Performance Analysis of Relay Subset Selection for Amplify-and-Forward Cognitive Relay Networks

    PubMed Central

    Qureshi, Ijaz Mansoor; Malik, Aqdas Naveed; Zubair, Muhammad

    2014-01-01

    Cooperative communication is regarded as a key technology in wireless networks, including cognitive radio networks (CRNs). It increases the diversity order of the signal to combat the unfavorable effects of fading channels by allowing distributed terminals to collaborate through sophisticated signal processing. Underlay CRNs impose strict interference constraints on the secondary users (SUs) active in the frequency band of the primary users (PUs), which limits the SUs' transmit power and coverage area. Relay selection offers a potential solution to the challenges faced by underlay networks by selecting either the single best relay or a subset of the potential relay set under different design requirements and assumptions. The best-relay selection schemes proposed in the literature for amplify-and-forward (AF) based underlay cognitive relay networks have been well studied in terms of outage probability (OP) and bit error rate (BER), whereas such analysis is lacking for multiple-relay selection schemes. The novelty of this work is to study the outage behavior of multiple-relay selection in the underlay CRN and derive closed-form expressions for the OP and BER through the cumulative distribution function (CDF) of the SNR received at the destination. The effectiveness of relay subset selection is shown through simulation results. PMID:24737980
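    The paper derives the OP in closed form; purely as an illustration of the quantity involved, a Monte Carlo estimate under invented assumptions (Rayleigh-faded hops, a min-SNR approximation of the AF end-to-end SNR, MRC-style summation over the selected subset) might look like:

```python
import random

def outage_probability(n_relays, subset_size, snr_th=1.0, mean_snr=10.0,
                       trials=20000, seed=7):
    """Monte Carlo estimate of the outage probability for relay subset
    selection (an illustration, not the paper's closed-form derivation).

    Each relay's AF end-to-end SNR is approximated by min(SNR1, SNR2) of its
    two exponentially distributed (Rayleigh-fading) hops; the best
    `subset_size` relays are combined, approximated here by summing their
    end-to-end SNRs (MRC-style).
    """
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        e2e = sorted((min(rng.expovariate(1 / mean_snr),
                          rng.expovariate(1 / mean_snr))
                      for _ in range(n_relays)), reverse=True)
        if sum(e2e[:subset_size]) < snr_th:
            outages += 1
    return outages / trials

# with the same seed, selecting a larger subset can only reduce outage
p_best1 = outage_probability(4, 1)
p_best2 = outage_probability(4, 2)
```

    This captures the qualitative point of the paper: moving from single-best-relay to subset selection trades extra coordination for a strictly lower outage probability.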

  10. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  11. Design of a vehicle based system to prevent ozone loss

    NASA Technical Reports Server (NTRS)

    Lynn, Sean R.; Bunker, Deborah; Hesbach, Thomas D., Jr.; Howerton, Everett B.; Hreinsson, G.; Mistr, E. Kirk; Palmer, Matthew E.; Rogers, Claiborne; Tischler, Dayna S.; Wrona, Daniel J.

    1993-01-01

    Reduced quantities of ozone in the atmosphere allow greater levels of ultraviolet light (UV) radiation to reach the earth's surface. This is known to cause skin cancer and mutations. Chlorine liberated from Chlorofluorocarbons (CFC's) and natural sources initiate the destruction of stratospheric ozone through a free radical chain reaction. The project goals are to understand the processes which contribute to stratospheric ozone loss, examine ways to prevent ozone loss, and design a vehicle-based system to carry out the prevention scheme. The 1992/1993 design objectives were to accomplish the first two goals and define the requirements for an implementation vehicle to be designed in detail starting next year. Many different ozone intervention schemes have been proposed though few have been researched and none have been tested. A scheme proposed by R.J. Cicerone, Scott Elliot and R.P.Turco late in 1991 was selected because of its research support and economic feasibility. This scheme uses hydrocarbon injected into the Antarctic ozone hole to form stable compounds with free chlorine, thus reducing ozone depletion. Because most polar ozone depletion takes place during a 3-4 week period each year, the hydrocarbon must be injected during this time window. A study of the hydrocarbon injection requirements determined that 100 aircraft traveling Mach 2.4 at a maximum altitude of 66,000 ft. would provide the most economic approach to preventing ozone loss. Each aircraft would require an 8,000 nm. range and be able to carry 35,000 lbs. of propane. The propane would be stored in a three-tank high pressure system. Missions would be based from airport regions located in South America and Australia. To best provide the requirements of mission analysis, an aircraft with L/D(sub cruise) = 10.5, SFC = 0.65 (the faculty advisor suggested that this number is too low) and a 250,000 lb TOGW was selected as a baseline. 
Modularity and multi-role functionality were selected to be key design features. Modularity provides ease of turnaround for the down-time critical mission. Multi-role functionality allows the aircraft to be used beyond its design mission, perhaps as an High Speed Civil Transport (HSCT) or for high altitude research.

  12. Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.

    PubMed

    Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco

    2015-06-04

    Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.

  13. A robust cooperative spectrum sensing scheme based on Dempster-Shafer theory and trustworthiness degree calculation in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru

    2014-12-01

    Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference among SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also preserve the current difference for each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms existing ones under different attack patterns and different numbers of malicious SUs.
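    The final step, combination by Dempster's rule, can be sketched for a two-hypothesis frame (channel busy vs. idle); the mass values below are invented for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs over a small frame.

    Masses are dicts keyed by frozensets of hypotheses; mass on the full
    frame expresses uncertainty.  Conflicting mass is renormalized away.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# two SUs reporting on the frame {H1 = channel busy, H0 = channel idle}
FRAME = frozenset({'H1', 'H0'})
m_su1 = {frozenset({'H1'}): 0.6, frozenset({'H0'}): 0.1, FRAME: 0.3}
m_su2 = {frozenset({'H1'}): 0.5, frozenset({'H0'}): 0.2, FRAME: 0.3}
fused = dempster_combine(m_su1, m_su2)
```

    In the scheme above, each SU's BPA would first be scaled by its trustworthiness degree before combination, so a malicious SU's conflicting evidence carries little weight in the fused decision.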

  14. Diode-laser-based RIMS measurements of strontium-90

    NASA Astrophysics Data System (ADS)

    Bushaw, B. A.; Cannon, B. D.

    1998-12-01

    Double- and triple-resonance excitation schemes for the ionization of strontium are presented. Use of single-mode diode lasers for the resonance excitations provides a high degree of optical isotopic selectivity: with double resonance, selectivity of >10^4 for 90Sr against the stable Sr isotopes has been demonstrated. Measurement of lineshapes and stable isotope shifts in the triple-resonance process indicates that the optical selectivity should increase to ~10^9. When combined with mass spectrometer selectivity, this is sufficient for measurement of 90Sr at background environmental levels. Additionally, autoionizing resonances have been investigated for improving ionization efficiency with lower-power lasers.

  15. Automatic classification of protein structures using physicochemical parameters.

    PubMed

    Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam

    2014-09-01

    Protein classification is the first step to functional annotation; SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion in the number of three dimensional (3D) protein structures generated versus their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both physicochemical parameters and spectrophores based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90-96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
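    The information-gain feature selection mentioned above can be sketched as follows; the median-split scoring and the toy data are illustrative, not the paper's pipeline:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels, threshold):
    """IG of splitting a numeric feature at `threshold` against class labels."""
    left = [y for x, y in zip(feature, labels) if x <= threshold]
    right = [y for x, y in zip(feature, labels) if x > threshold]
    n = len(labels)
    cond = sum(len(part) / n * entropy(part) for part in (left, right) if part)
    return entropy(labels) - cond

def select_features(X, y, k):
    """Rank feature columns by best-median-split information gain, keep top k.
    (A sketch of per-family feature selection; real pipelines use finer splits.)"""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        med = sorted(col)[(len(col) - 1) // 2]
        scores.append((information_gain(col, y, med), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.9], [0.8, 1.1]]
y = ['a', 'a', 'b', 'b']
selected = select_features(X, y, 1)
```

    Running the selection per SCOP superfamily or Pfam family, as the study does, lets each class keep the physicochemical parameters most informative for it.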

  16. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
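    A minimal dither-modulation QIM embedder and decoder of the kind underlying these schemes (operating here on a plain coefficient vector rather than grouped wavelet trees; `delta` is an assumed quantization step):

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Dither-modulation QIM: quantize each selected coefficient onto one of
    two interleaved lattices to encode a bit (sketch of the basic embedder)."""
    out = np.asarray(coeffs, dtype=float).copy()
    for i, b in enumerate(bits):
        d = delta / 4 if b else -delta / 4
        out[i] = np.round((out[i] - d) / delta) * delta + d
    return out

def qim_extract(coeffs, n_bits, delta=8.0):
    """Minimum-distance decoding against the two dither lattices."""
    bits = []
    for i in range(n_bits):
        c = coeffs[i]
        err = []
        for b, d in ((0, -delta / 4), (1, delta / 4)):
            q = np.round((c - d) / delta) * delta + d
            err.append((abs(c - q), b))
        bits.append(min(err)[1])
    return bits

coeffs = np.array([13.2, -4.7, 22.1, 0.9])
marked = qim_embed(coeffs, [1, 0, 1, 1])
```

    Any attack distortion below delta/4 leaves the decoded bits intact; the coefficient-grouping schemes and BCH codes studied in the paper extend that robustness margin further, up to the cliff effect noted above.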

  17. Break-even cost of cloning in genetic improvement of dairy cattle.

    PubMed

    Dematawewa, C M; Berger, P J

    1998-04-01

    Twelve different models for alternative progeny-testing schemes based on genetic and economic gains were compared. The first 10 alternatives were considered to be optimally operating progeny-testing schemes. Alternatives 1 to 5 considered the following combinations of technologies: 1) artificial insemination, 2) artificial insemination with sexed semen, 3) artificial insemination with embryo transfer, 4) artificial insemination and embryo transfer with few bulls as sires, and 5) artificial insemination, embryo transfer, and sexed semen with few bulls, respectively. Alternatives 6 to 12 considered cloning from dams. Alternatives 11 and 12 considered a regular progeny-testing scheme that had selection gains (intensity x accuracy x genetic standard deviation) of 890, 300, 600, and 89 kg, respectively, for the four paths. The sums of the generation intervals of the four paths were 19 yr for the first 8 alternatives and 19.5, 22, 29, and 29.5 yr for alternatives 9 to 12, respectively. Rates of genetic gain in milk yield for alternatives 1 to 5 were 257, 281, 316, 327, and 340 kg/yr, respectively. The rate of gain for other alternatives increased as number of clones increased. The use of three records per clone increased both accuracy and generation interval of a path. Cloning was highly beneficial for progeny-testing schemes with lower intensity and accuracy of selection. The discounted economic gain (break-even cost) per clone was the highest ($84) at current selection levels using sexed semen and three records on clones of the dam. The total cost associated with cloning has to be below $84 for cloning to be an economically viable option.

  18. Selective determination of arginine-containing and tyrosine-containing peptides using capillary electrophoresis and laser-induced fluorescence detection.

    PubMed

    Cobb, K A; Novotny, M V

    1992-01-01

    The use of two different amino acid-selective fluorogenic reagents for the derivatization of peptides is investigated. One such scheme utilizes a selective reaction of benzoin with the guanidine moiety to derivatize arginine residues occurring in a peptide. The second scheme involves the formylation of tyrosine, followed by reaction with 4-methoxy-1,2-phenylenediamine. The use of capillary electrophoresis and laser-induced fluorescence detection allows enhanced efficiencies and sensitivities to be obtained for the separations of either arginine- or tyrosine-containing peptides. A helium-cadmium laser (325 nm) is ideally suited for the laser-based detection system due to a close match of the excitation maxima of derivatized peptides from both reactions. A detection limit of 270 amol is achieved for model arginine-containing peptides, while the detection limit for model tyrosine-containing peptides is measured at 390 amol. Both derivatization reactions are found to be useful for high-sensitivity peptide mapping applications in which only the peptides containing the derivatized amino acids are detected.

  19. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest because, in practical engine simulations, extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, and treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared with the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. 
The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species in practical computer time.
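    The on-the-fly reduction step can be sketched as a graph search over interaction coefficients; this toy version uses a Dijkstra-like best-path search in place of the paper's RBFS variant, and the species names and coefficients are invented:

```python
import heapq

def drgep_reduce(graph, targets, tol=0.01):
    """DRGEP-flavoured skeletal mechanism reduction (illustrative sketch).

    `graph[a][b]` is the direct interaction coefficient of species `a` on
    `b`, in [0, 1].  A species is kept when the maximum path product
    (its R-value) from any search-initiating target species exceeds `tol`.
    """
    r_value = {t: 1.0 for t in targets}
    heap = [(-1.0, t) for t in targets]
    while heap:
        neg_r, sp = heapq.heappop(heap)
        r = -neg_r
        if r < r_value.get(sp, 0.0):
            continue  # stale heap entry
        for nbr, coeff in graph.get(sp, {}).items():
            cand = r * coeff
            if cand > r_value.get(nbr, 0.0):
                r_value[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return {sp for sp, r in r_value.items() if r >= tol}

# toy mechanism: fuel -> intermediate -> radical, plus a weakly coupled trace
graph = {
    'fuel': {'interm': 0.9, 'trace': 0.004},
    'interm': {'radical': 0.8},
    'radical': {},
}
kept = drgep_reduce(graph, ['fuel'])
```

    In the DAC setting this reduction is repeated every chemistry step with local conditions, so the choice of search-initiating species and of `tol` directly controls the efficiency-accuracy trade-off the paper examines.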

  20. Economic evaluation of progeny-testing and genomic selection schemes for small-sized nucleus dairy cattle breeding programs in developing countries.

    PubMed

    Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M

    2017-03-01

    In developing countries, minimal and erratic performance and pedigree recording impede implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus was made up of 453 male and 360 female animals distributed over 8 non-overlapping age classes. Each year, 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margin, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (the size of the commercial population). Genetic superiorities, simulated deterministically with a pseudo-BLUP index, and CDE were determined using gene flow. Costs were those for one generation of selection. Results show that GS schemes had higher cumulated genetic gain in the commercial cow population and higher gross margins than PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on the size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes, many bulls are culled before selection. We therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage increased gross margins in PT schemes, but they remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries. 
The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
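    The revenue rule above lends itself to a one-line calculation; the CDE and cost figures below are invented for illustration only (chosen so that GS comes out roughly 3-fold ahead, in line with the reported range):

```python
def gross_margin(cde_kg, milk_price_eur=0.32, commercial_cows=100_000,
                 costs_eur=0.0):
    """Gross margin = CDE (kg) x price (EUR/kg) x commercial population - costs."""
    return cde_kg * milk_price_eur * commercial_cows - costs_eur

# hypothetical schemes: GS with higher CDE and lower running costs than PT
pt_margin = gross_margin(cde_kg=300, costs_eur=6_000_000)
gs_margin = gross_margin(cde_kg=450, costs_eur=3_000_000)
```

    The structure makes the study's conclusion easy to see: GS gains on both terms at once, through higher cumulative discounted expressions (shorter generation interval) and lower running costs (no live test bulls).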

  1. The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects

    NASA Astrophysics Data System (ADS)

    Trinh, V. B.; Zhong, Y.; Osipov, S. P.

    2017-04-01

    The article describes the basic computed tomography scanning schemes for lengthy symmetric objects: continuous (discrete) rotation with discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire a 2D projection; continuous (discrete) linear movement with discrete rotation to acquire a one-dimensional projection; and continuous (discrete) rotation to acquire a 2D projection. A general method to calculate the scanning time is discussed in detail, from which a comparison principle for selecting a scanning scheme is extracted. The comparison is possible because the input data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of the X-ray source; the angle of the X-ray cone beam; the transverse dimension of a single detector; the specified resolution; and the maximum time needed to form one point of the original image (which determines the number of registered photons). The capabilities of the proposed method for comparing scanning schemes are demonstrated for a cylindrical object with a mass thickness of 4 g/cm2, an effective atomic number of 15, and a length of 1300 mm. The scanning times of all schemes are analyzed, their productivity is examined, and the most efficient scheme is selected.
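    The scanning-time comparison reduces to simple accounting; everything below (pitch, angle count, per-point dwell time) is an invented illustration of the kind of model the article formalizes:

```python
import math

def scan_time_s(length_mm, pitch_mm, n_angles, t_point_s):
    """Discrete scheme: one projection per (linear position, rotation angle)
    pair, each taking `t_point_s`, the dwell time set by the photon count
    the detector must register at the specified resolution."""
    n_positions = math.ceil(length_mm / pitch_mm)
    return n_positions * n_angles * t_point_s

# 1300 mm object as in the article's example; sampling parameters invented
t = scan_time_s(1300, 1.0, 720, 0.0005)
```

    Comparing schemes then amounts to evaluating this product under each scheme's geometry, since the source power, cone-beam angle, and required photon statistics are held fixed across all of them.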

  2. A Novel Piggyback Selection Scheme in IEEE 802.11e HCCA

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Jin; Kim, Jae-Hyun

    A control frame can be piggybacked onto a data frame to increase channel efficiency in wireless communication. However, if a control frame carrying global control information is piggybacked, the delay of data frames from an access point will increase even when there is only one station with a low physical transmission rate. This is similar to the anomaly phenomenon in a network that supports multi-rate transmission. In this letter, we define this phenomenon as “the piggyback problem at low physical transmission rate” and evaluate its effect with respect to physical transmission rate and normalized traffic load. We then propose a delay-based piggyback scheme. Simulations show that the proposed scheme reduces the average frame transmission delay by about 24% and improves channel utilization by about 25%.

  3. Data acquisition and path selection decision making for an autonomous roving vehicle. [laser pointing control system for vehicle guidance

    NASA Technical Reports Server (NTRS)

    Shen, C. N.; Yerazunis

    1979-01-01

    The feasibility of using range/pointing angle data, such as might be obtained by a laser rangefinder, for terrain evaluation in the 10-40 meter range on which to base the guidance of an autonomous rover was investigated. The decision procedure of the rapid estimation scheme for the detection of discrete obstacles was modified to reinforce the detection ability. With the introduction of the logarithmic scanning scheme and the obstacle identification scheme, previously developed algorithms are combined to demonstrate the overall performance of the integrated route designation system using a laser rangefinder. In an attempt to cover a greater range, 30 m to 100 m, the problem of estimating gradients at middle range in the presence of pointing angle noise is investigated.

  4. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    PubMed

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ selective encryption to encrypt the important and sensitive parts of video information, aiming to ensure real-time performance and encryption efficiency. Classic block ciphers are not applicable to video encryption due to their high computational overhead. In this paper, we propose an encryption selection control module that encrypts video syntax elements dynamically under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
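    The selective-encryption idea can be sketched compactly. A one-dimensional logistic map stands in for the paper's spatiotemporal chaos system, and "syntax elements" are just byte values here; both are simplifying assumptions.

```python
# Sketch: chaotic keystream + XOR applied only to selected elements.
# The logistic map and binarization rule below are illustrative stand-ins.

def logistic_keystream(x0, n, r=3.99):
    """Binarize logistic-map iterates (threshold at 0.5) into n key bytes."""
    x, out = x0, []
    for _ in range(n):
        byte = 0
        for _ in range(8):
            x = r * x * (1 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)
        out.append(byte)
    return out

def selective_xor(elements, selected_idx, key):
    """Encrypt only the selected syntax elements by XOR with the keystream;
    XOR is its own inverse, so the same call decrypts."""
    enc = list(elements)
    for k, i in zip(key, selected_idx):
        enc[i] ^= k
    return enc
```

    Unselected elements pass through untouched, which is what keeps the compression ratio and time cost nearly unaffected.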

  5. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve convergence speed, determine whether a global minimum is found, and determine whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
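    One of the initialization strategies named above, longest-norm pixel selection, is simple enough to sketch. The data layout (each pixel as a list of band values) is an assumption for illustration; the actual cPMF solver is not shown.

```python
# Sketch: seed the endmember estimates with the pixels of largest
# spectral (Euclidean) norm. Pure Python; pixel layout is hypothetical.
import math

def longest_norm_init(pixels, k):
    """Return the k pixels with the largest Euclidean norm as initial
    endmember estimates for the iterative cPMF solver."""
    return sorted(pixels, key=lambda p: math.hypot(*p), reverse=True)[:k]
```

    Random selection would instead draw k pixels uniformly; the paper compares both against standard endmember selection routines.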

  6. Vibrational quasi-degenerate perturbation theory with optimized coordinates: applications to ethylene and trans-1,3-butadiene.

    PubMed

    Yagi, Kiyoshi; Otaki, Hiroki

    2014-02-28

    A perturbative extension to optimized coordinate vibrational self-consistent field (oc-VSCF) is proposed based on the quasi-degenerate perturbation theory (QDPT). A scheme to construct the degenerate space (P space) is developed, which incorporates degenerate configurations and alleviates the divergence of the perturbative expansion due to localized coordinates in oc-VSCF (e.g., local O-H stretching modes of water). An efficient configuration selection scheme is also implemented, which screens out the Hamiltonian matrix element between the P space configuration (p) and the complementary Q space configuration (q) based on a difference in their quantum numbers (λ_pq = Σ_s |p_s - q_s|). It is demonstrated that the second-order vibrational QDPT based on optimized coordinates (oc-VQDPT2) smoothly converges with respect to the order of the mode coupling, and outperforms the conventional one based on normal coordinates. Furthermore, an improved, fast algorithm is developed for optimizing the coordinates. First, the minimization of the VSCF energy is conducted in a restricted parameter space, in which only a portion of the coordinate pairs is selectively transformed. A rational index is devised for this purpose, which identifies, based on the magnitude of harmonic coupling induced by the transformation, the important coordinate pairs to mix and the others that may remain unchanged. Second, a cubic force field (CFF) is employed in place of a quartic force field, which bypasses intensive procedures that arise due to the presence of the fourth-order force constants. It is found that oc-VSCF based on CFF together with the pair selection scheme yields coordinates similar in character to the conventional ones, such that the final vibrational energy is affected very little while gaining an order-of-magnitude acceleration. The proposed method is applied to ethylene and trans-1,3-butadiene.
An accurate, multi-resolution potential, which combines the MP2 and coupled-cluster with singles, doubles, and perturbative triples level of electronic structure theory, is generated and employed in the oc-VQDPT2 calculation to obtain the fundamental tones as well as selected overtones/combination tones coupled to the fundamentals through the Fermi resonance. The calculated frequencies of ethylene and trans-1,3-butadiene are found to be in excellent agreement with the experimental values with a mean absolute error of 8 and 9 cm(-1), respectively.

  7. Speed Sensorless Induction Motor Drives for Electrical Actuators: Schemes, Trends and Tradeoffs

    NASA Technical Reports Server (NTRS)

    Elbuluk, Malik E.; Kankam, M. David

    1997-01-01

    For a decade, induction motor drive-based electrical actuators have been under investigation as potential replacements for the conventional hydraulic and pneumatic actuators in aircraft. Advantages of electric actuators include lower weight and size, reduced maintenance and operating costs, improved safety due to the elimination of hazardous fluids and high-pressure hydraulic and pneumatic actuators, and increased efficiency. Recently, the emphasis of research on induction motor drives has been on sensorless vector control, which eliminates flux and speed sensors mounted on the motor. Also, the development of effective speed and flux estimators has allowed good rotor flux-oriented (RFO) performance at all speeds except those close to zero. Sensorless control has improved the motor performance compared to Volts/Hertz (or constant flux) controls. This report evaluates documented schemes for speed sensorless drives, and discusses the trends and tradeoffs involved in selecting a particular scheme. These schemes combine the attributes of direct and indirect field-oriented control (FOC) or use model reference adaptive systems (MRAS) with a speed-dependent current model for flux estimation which tracks the voltage model-based flux estimator. Many factors are important in comparing the effectiveness of a speed sensorless scheme. Among them are wide speed range capability, motor parameter insensitivity and noise reduction. Although a number of schemes have been proposed for speed estimation, zero-speed FOC with robustness against parameter variations still remains an area of research for speed sensorless control.

  8. Playing quantum games by a scheme with pre- and post-selection

    NASA Astrophysics Data System (ADS)

    Weng, Guo-Fu; Yu, Yang

    2016-01-01

    We propose a scheme to play quantum games by assuming that the two players interact with each other. Through pre-selection, the two players can choose their initial states, and some dilemmas of the classical game may be removed by post-selection, which is particularly useful for cooperative games. We apply the proposal to both the BoS and Prisoner's Dilemma games in cooperative situations. The examples show that the proposal would guarantee a remarkably binding agreement between the two parties: any deviation during the game will be detected, and the game may be abnegated. The examples also show that, with pre- and post-selection, the initial state of the cooperative game does not disrupt the process of obtaining preferable payoffs, which is not true of other schemes for implementing quantum games. We point out that one player can use the scheme to detect his opponent's choices if he has an advantage in information theory and technology.

  9. Experiments in encoding multilevel images as quadtrees

    NASA Technical Reports Server (NTRS)

    Lansing, Donald L.

    1987-01-01

    Image storage requirements for several encoding methods are investigated, and the use of quadtrees with multi-gray-level or multicolor images is explored. The results of encoding a variety of images having up to 256 gray levels using three schemes (full raster, run-length and quadtree) are presented. Although there is considerable literature on the use of quadtrees to store and manipulate binary images, their application to multilevel images is relatively undeveloped. The potential advantage of quadtree encoding is that an entire area with a uniform gray level may be encoded as a unit. A pointerless quadtree encoding scheme is described. Data are presented on the size of the quadtree required to encode selected images and on the relative storage requirements of the three encoding schemes. A segmentation scheme based on the statistical variation of gray levels within a quadtree quadrant is described. This parametric scheme may be used to control the storage required by an encoded image and to preprocess a scene for feature identification. Several sets of black-and-white and pseudocolor images obtained by varying the segmentation parameter are shown.
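    The two key ideas above, pointerless encoding and variance-controlled segmentation, can be sketched together. The leaf format (quadrant-path string plus mean gray level) and the max-min spread test standing in for "statistical variation" are assumptions, not the report's exact scheme.

```python
# Sketch of a pointerless quadtree encoder: a quadrant whose gray levels
# vary by no more than `thr` is stored as one (path, mean) leaf. Raising
# thr trades image fidelity for a smaller tree, as in the segmentation
# scheme described above. Quadrant order and leaf format are assumptions.

def encode_quadtree(img, x, y, size, thr, path=""):
    """Return [(path, value)] leaves; path is the quadrant-index string."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if max(vals) - min(vals) <= thr or size == 1:
        return [(path, sum(vals) // len(vals))]   # uniform enough: one leaf
    h = size // 2
    leaves = []
    for q, (dx, dy) in enumerate([(0, 0), (h, 0), (0, h), (h, h)]):
        leaves += encode_quadtree(img, x + dx, y + dy, h, thr, path + str(q))
    return leaves
```

    A perfectly uniform image collapses to a single leaf, which is exactly the storage advantage claimed for quadtrees over full-raster encoding.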

  10. Exploring Sampling in the Detection of Multicategory EEG Signals

    PubMed Central

    Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun

    2015-01-01

    The paper presents a structure based on sampling and machine learning techniques for the detection of multicategory EEG signals, where random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before the RS and OS schemes are used, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to obtain representative observations from each group of each category of EEG data. All of the samples selected by the RS scheme from the groups of each category are then combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Eleven statistical features are then extracted from the RS and OS sets separately. Finally, this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM), to evaluate the performance of the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with the RS scheme is the optimum choice for detection of multicategory EEG signals. PMID:25977705
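    The two sampling schemes can be sketched as follows. Interpreting "optimal allocation" as Neyman-style allocation (sample share proportional to group size times group standard deviation) is an assumption about the paper's OS scheme.

```python
# Sketch: random sampling vs. an optimal-allocation split of the sample
# budget across groups. The Neyman-style rule here is an assumption.
import random

def random_sample(group, n, seed=0):
    """Draw n observations uniformly at random from one group."""
    return random.Random(seed).sample(group, n)

def optimal_allocation(sizes, stds, n):
    """Allocate a total budget of n observations across groups
    proportionally to N_h * S_h (group size times group std)."""
    weights = [N * s for N, s in zip(sizes, stds)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]
```

    A highly variable group receives more of the budget under optimal allocation, which is why OS can represent heterogeneous EEG segments with fewer observations.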

  11. Forage and breed effects on behavior and temperament of pregnant beef heifers

    USDA-ARS?s Scientific Manuscript database

    Integration of behavioral observations with traditional selection schemes may lead to enhanced animal well-being and more profitable forage-based cattle production systems. Brahman-influenced (BR; n=64) and Gelbvieh x Angus (GA; n=64) heifers consumed either toxic endophyte-infected tall fescue (E+)...

  12. Differences exist across insurance schemes in China post-consolidation.

    PubMed

    Li, Yang; Zhao, Yinjun; Yi, Danhui; Wang, Xiaojun; Jiang, Yan; Wang, Yu; Liu, Xinchun; Ma, Shuangge

    2017-01-01

    In China, the basic insurance system consists of three schemes: the UEBMI (Urban Employee Basic Medical Insurance), URBMI (Urban Resident Basic Medical Insurance), and NCMS (New Cooperative Medical Scheme), across which significant differences have been observed. Since 2009, the central government has been experimenting with consolidating these schemes in selected areas. This study examines whether differences still exist across schemes after the consolidation. A survey was conducted in the city of Suzhou, collecting data on subjects 45 years old and above with at least one inpatient or outpatient treatment during a period of twelve months. Analysis on 583 subjects was performed comparing subjects' characteristics across insurance schemes. A resampling-based method was applied to compute the predicted gross medical cost, OOP (out-of-pocket) cost, and insurance reimbursement rate. Subjects under different insurance schemes differ in multiple aspects. For inpatient treatments, subjects under the URBMI have the highest observed and predicted gross and OOP costs, while those under the UEBMI have the lowest. For outpatient treatments, subjects under the UEBMI and URBMI have comparable costs, while those under the NCMS have much lower costs. Subjects under the NCMS also have a much lower reimbursement rate. Differences still exist across schemes in medical costs and insurance reimbursement rate post-consolidation. Further investigations are needed to identify the causes, and interventions are needed to eliminate such differences.

  13. Differences exist across insurance schemes in China post-consolidation

    PubMed Central

    Yi, Danhui; Wang, Xiaojun; Jiang, Yan; Wang, Yu; Liu, Xinchun

    2017-01-01

    Background In China, the basic insurance system consists of three schemes: the UEBMI (Urban Employee Basic Medical Insurance), URBMI (Urban Resident Basic Medical Insurance), and NCMS (New Cooperative Medical Scheme), across which significant differences have been observed. Since 2009, the central government has been experimenting with consolidating these schemes in selected areas. This study examines whether differences still exist across schemes after the consolidation. Methods A survey was conducted in the city of Suzhou, collecting data on subjects 45 years old and above with at least one inpatient or outpatient treatment during a period of twelve months. Analysis on 583 subjects was performed comparing subjects’ characteristics across insurance schemes. A resampling-based method was applied to compute the predicted gross medical cost, OOP (out-of-pocket) cost, and insurance reimbursement rate. Results Subjects under different insurance schemes differ in multiple aspects. For inpatient treatments, subjects under the URBMI have the highest observed and predicted gross and OOP costs, while those under the UEBMI have the lowest. For outpatient treatments, subjects under the UEBMI and URBMI have comparable costs, while those under the NCMS have much lower costs. Subjects under the NCMS also have a much lower reimbursement rate. Conclusions Differences still exist across schemes in medical costs and insurance reimbursement rate post-consolidation. Further investigations are needed to identify the causes, and interventions are needed to eliminate such differences. PMID:29125837

  14. GoDisco: Selective Gossip Based Dissemination of Information in Social Community Based Overlays

    NASA Astrophysics Data System (ADS)

    Datta, Anwitaman; Sharma, Rajesh

    We propose and investigate GoDisco, a gossip-based decentralized mechanism inspired by social principles and behavior, to disseminate information in online social community networks, using exclusively social links and exploiting semantic context to keep the dissemination process selective to relevant nodes. Such a dissemination scheme, gossiping over an egocentric social network, is unique and arguably a concept whose time has arrived: it emulates word-of-mouth behavior and can have interesting applications such as probabilistic publish/subscribe, decentralized recommendation, and contextual advertisement systems, to name a few. Simulation-based experiments show that despite using only local knowledge and contacts, the system has good global coverage and behavior.

  15. Agent-based power sharing scheme for active hybrid power sources

    NASA Astrophysics Data System (ADS)

    Jiang, Zhenhua

    The active hybridization technique provides an effective approach to combining the best properties of a heterogeneous set of power sources to achieve higher energy density, power density and fuel efficiency. Active hybrid power sources can be used to power hybrid electric vehicles with selected combinations of internal combustion engines, fuel cells, batteries, and/or supercapacitors. They can be deployed in all-electric ships to build a distributed electric power system. They can also be used in a bulk power system to construct an autonomous distributed energy system. An important aspect in designing an active hybrid power source is to find a suitable control strategy that can manage the active power sharing and take advantage of the inherent scalability and robustness benefits of the hybrid system. This paper presents an agent-based power sharing scheme for active hybrid power sources. To demonstrate the effectiveness of the proposed agent-based power sharing scheme, simulation studies are performed for a hybrid power source that can be used in a solar car as the main propulsion power module. Simulation results clearly indicate that the agent-based control framework is effective to coordinate the various energy sources and manage the power/voltage profiles.

  16. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
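    The distortion-rate criterion above can be made concrete with the simplest wavelet. A one-level Haar transform with all detail coefficients dropped is only a minimal stand-in for the paper's parameterized-wavelet pipeline with EZW coding.

```python
# Sketch: one-level Haar analysis and the percent energy lost at a fixed
# 50% compression rate (keep approximations, drop details). Haar here is
# an illustrative stand-in for the optimized signal-dependent wavelet.

def haar_step(signal):
    """One-level Haar analysis: (approximation, detail) coefficients."""
    a = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    d = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    return a, d

def distortion_rate(signal):
    """Percent signal energy lost when all detail coefficients are dropped;
    each dropped detail d contributes 2*d**2 reconstruction error."""
    a, d = haar_step(signal)
    lost = 2 * sum(x * x for x in d)
    total = sum(x * x for x in signal)
    return 100 * lost / total
```

    Wavelet optimization amounts to choosing the analysis filters so that, for the signal at hand, this lost-energy figure is as small as possible at the target compression rate.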

  17. A data base and analysis program for shuttle main engine dynamic pressure measurements. Appendix F: Data base plots for SSME tests 750-120 through 750-200

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base and data base management system developed to characterize the Space Shuttle Main Engine (SSME) dynamic pressure environment is presented. The data base represents dynamic pressure measurements obtained during single engine hot firing tests of the SSME. Software is provided to permit statistical evaluation of selected measurements under specified operating conditions. An interpolation scheme is also included to estimate spectral trends with SSME power level.

  18. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method respectively to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
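    The dispersion comparison underlying all of this can be sketched for a plain (non-rotated) first-derivative stencil. An antisymmetric 2M-point stencil acting on exp(ikx) has effective wavenumber Σ 2 c_m sin(mk); the coefficients below are the standard Taylor (TE) values, and an LS or SA scheme would instead choose c_m to flatten the error over a wider wavenumber band. The reduction to 1D is a simplification, not the paper's TTI operator.

```python
# Sketch: dispersion error of Taylor-based FD stencils. An optimized (LS/SA)
# scheme would pick coefficients minimizing this error over a k interval.
import math

def effective_wavenumber(coeffs, k):
    """Effective wavenumber of an antisymmetric FD first-derivative stencil
    with coefficients c_1..c_M (grid spacing h = 1, so k is k*h)."""
    return sum(2 * c * math.sin((m + 1) * k) for m, c in enumerate(coeffs))

TE2 = [1 / 2]            # 2nd-order: (f[i+1] - f[i-1]) / 2h
TE4 = [2 / 3, -1 / 12]   # 4th-order Taylor coefficients

def dispersion_error(coeffs, k):
    """Absolute deviation of the effective wavenumber from the exact k."""
    return abs(effective_wavenumber(coeffs, k) - k)
```

    At small k every stencil is accurate; the schemes differ in how far toward the Nyquist wavenumber the error stays small, which is exactly the "widened wavenumber range" claimed above.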

  19. A spatio-temporal evaluation of the WRF physical parameterisations for numerical rainfall simulation in semi-humid and semi-arid catchments of Northern China

    NASA Astrophysics Data System (ADS)

    Tian, Jiyang; Liu, Jia; Wang, Jianhua; Li, Chuanzhe; Yu, Fuliang; Chu, Zhigang

    2017-07-01

    Mesoscale Numerical Weather Prediction systems can provide rainfall products at high resolutions in space and time, playing an increasingly important role in water management and flood forecasting. The Weather Research and Forecasting (WRF) model is one of the most popular mesoscale systems and has been extensively used in research and practice. However, for hydrologists, an unsolved question must be addressed before each model application in a different target area. That is, how are the most appropriate combinations of physical parameterisations from the vast WRF library selected to provide the best downscaled rainfall? In this study, the WRF model was applied with 12 designed parameterisation schemes with different combinations of physical parameterisations, including microphysics, radiation, planetary boundary layer (PBL), land-surface model (LSM) and cumulus parameterisations. The selected study areas are two semi-humid and semi-arid catchments located in the Daqinghe River basin, Northern China. The performance of WRF with different parameterisation schemes is tested for simulating eight typical 24-h storm events with different evenness in space and time. In addition to the cumulative rainfall amount, the spatial and temporal patterns of the simulated rainfall are evaluated based on a two-dimensional composed verification statistic. Among the 12 parameterisation schemes, Scheme 4 outperforms the other schemes with the best average performance in simulating rainfall totals and temporal patterns; in contrast, Scheme 6 is generally a good choice for simulations of spatial rainfall distributions. Regarding the individual parameterisations, the WRF Single-Moment 6 (WSM6) microphysics, the Yonsei University (YSU) PBL scheme, and the Kain-Fritsch (KF) and Grell-Devenyi (GD) cumulus parameterisations are the better choices in the study area.
These findings provide helpful information for WRF rainfall downscaling in semi-humid and semi-arid areas. The methodologies to design and test the combination schemes of parameterisations can also be regarded as a reference for generating ensembles in numerical rainfall predictions using the WRF model.

  20. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning a method. The principal idea of the proposed CAD method is the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm: a hybrid approach that switches between the conventional schemes (LA and MELA) and our CAD to reduce the overall computational load. A reliable condition for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
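    The switching idea can be sketched as follows. Using the intensity spread of a block as the complexity measure, and the two thresholds, are simplifying assumptions; the paper's trained switching condition is based on MSE and CPU time.

```python
# Sketch: classify each region by a cheap complexity measure and assign
# the light LA method to plain regions, reserving the costly CAD
# interpolation for complex ones. Thresholds are illustrative assumptions.

def classify_region(block, t_plain, t_medium):
    """Return 'LA', 'MELA' or 'CAD' from the block's intensity spread."""
    spread = max(block) - min(block)
    if spread <= t_plain:
        return "LA"
    if spread <= t_medium:
        return "MELA"
    return "CAD"

def line_average(above, below):
    """LA deinterlacing: each missing pixel is the mean of its vertical
    neighbors in the lines above and below."""
    return [(a + b) / 2 for a, b in zip(above, below)]
```

    Most blocks of natural video are plain or medium, so the expensive CAD path runs on only a small fraction of the image, which is where the computational saving comes from.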

  1. A Tree Based Broadcast Scheme for (m, k)-firm Real-Time Stream in Wireless Sensor Networks

    PubMed Central

    Park, HoSung; Kim, Beom-Su; Kim, Kyong Hoon; Shah, Babar; Kim, Ki-Il

    2017-01-01

    Recently, various unicast routing protocols have been proposed to deliver measured data from the sensor node to the sink node within a predetermined deadline in wireless sensor networks. In parallel with these approaches, some applications demand a specific service based on broadcast to all nodes within the deadline, a feasible real-time traffic model and improved energy efficiency. However, current protocols based on either flooding or one-to-one unicast cannot meet the above requirements entirely. Moreover, as far as the authors know, there is no study yet of a real-time broadcast protocol supporting an application-specific traffic model in WSNs. Based on the above analysis, in this paper we propose a new (m, k)-firm-based Real-time Broadcast Protocol (FRBP), constructing a broadcast tree to satisfy the (m, k)-firm constraint, which is applicable to the real-time model in resource-constrained WSNs. The broadcast tree in FRBP is constructed by a distance-based priority scheme, whereas energy efficiency is improved by selecting as few nodes on the tree as possible. To overcome unstable network environments, the recovery scheme invokes rapid partial tree reconstruction to designate another node as the parent on the tree according to the measured (m, k)-firm real-time condition and local state monitoring. Finally, simulation results are given to demonstrate the superiority of FRBP compared to the existing schemes in terms of average deadline missing ratio, average throughput and energy consumption. PMID:29120404
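    A distance-driven tree build can be sketched greedily: starting from the sink, each unattached node within radio range joins the connected node closest to it. This greedy rule is an illustrative reading of "distance-based priority", not FRBP's actual construction, and the (m, k)-firm bookkeeping is omitted.

```python
# Sketch: greedy broadcast-tree construction by nearest-connected-node
# attachment within radio range. Node layout and rule are assumptions.
import math

def build_tree(positions, sink, radio_range):
    """positions: {node: (x, y)}. Returns {child: parent} for every node
    reachable from the sink via hops of length <= radio_range."""
    dist = lambda a, b: math.dist(positions[a], positions[b])
    parent, connected = {}, {sink}
    while True:
        best = None  # (distance, joining node, parent candidate)
        for u in positions:
            if u in connected:
                continue
            for v in connected:
                d = dist(u, v)
                if d <= radio_range and (best is None or d < best[0]):
                    best = (d, u, v)
        if best is None:
            return parent
        _, u, v = best
        parent[u] = v
        connected.add(u)
```

    On a recovery event, FRBP would re-run this kind of attachment locally for the affected subtree rather than rebuilding the whole tree.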

  2. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    PubMed

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
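    The atlas-ranking criterion described above can be sketched on discrete data: compute the conditional entropy of the target intensities given the propagated atlas labels from their joint histogram, and rank atlases from lowest entropy (atlas explains the target best) upward. The flattening of images to value sequences is a simplification for illustration.

```python
# Sketch: H(target | atlas labels) from a joint histogram, then rank
# atlases by it (ascending). Data layout is a simplifying assumption.
import math
from collections import Counter

def conditional_entropy(target, labels):
    """H(T | L) in bits for two equal-length sequences of discrete values."""
    joint = Counter(zip(labels, target))
    marg = Counter(labels)
    n = len(target)
    h = 0.0
    for (l, t), c in joint.items():
        h -= (c / n) * math.log2(c / marg[l])   # -p(l,t) log p(t|l)
    return h

def rank_atlases(target, atlas_labelings):
    """Order atlas names so the most explanatory atlas comes first."""
    return sorted(atlas_labelings,
                  key=lambda a: conditional_entropy(target, atlas_labelings[a]))
```

    A labeling that perfectly partitions the target intensities scores zero entropy; an uninformative constant labeling scores the full marginal entropy of the target.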

  3. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme, called code-time diversity, is proposed for spread spectrum systems. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
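    The transmit/receive idea can be sketched over a noiseless channel: the same symbol is spread by N different codes over N successive intervals, and the receiver correlates each interval with its own code and sums the correlations. The bipolar chips and the plain sum-combiner are illustrative assumptions; the paper's demodulators handle fading paths.

```python
# Sketch of code-time diversity: one symbol, N codes, N symbol intervals.
# Codes and combiner are illustrative, not the paper's exact design.

def spread_symbol(symbol, codes):
    """Return the N chip blocks carrying one +1/-1 symbol, one block per
    symbol interval, each spread by its own code."""
    return [[symbol * chip for chip in code] for code in codes]

def despread(chip_blocks, codes):
    """Correlate each interval with its own code and combine by summing
    (the sum plays the role of the diversity combiner); hard-decide."""
    total = sum(sum(r * c for r, c in zip(block, code))
                for block, code in zip(chip_blocks, codes))
    return 1 if total >= 0 else -1
```

    With L uncorrelated paths per interval, each of the N intervals contributes L independent faded copies, giving the N x L diversity order stated above.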

  4. Predominant information quality scheme for the essential amino acids: an information-theoretical analysis.

    PubMed

    Esquivel, Rodolfo O; Molina-Espíritu, Moyocoyani; López-Rosa, Sheila; Soriano-Correa, Catalina; Barrientos-Salcedo, Carolina; Kohout, Miroslav; Dehesa, Jesús S

    2015-08-24

    In this work we undertake a pioneering information-theoretic analysis of 18 selected amino acids extracted from a natural protein, bacteriorhodopsin (1C3W). The conformational structures of each amino acid are analyzed using various quantum chemistry methodologies at high levels of theory: HF, M062X and CISD(Full). The Shannon entropy, Fisher information and disequilibrium are determined to grasp the spatial spreading features of delocalizability, order and uniformity of the optimized structures. These three entropic measures uniquely characterize all amino acids through a predominant information-theoretic quality scheme (PIQS), which gathers all chemical families by means of three major spreading features: delocalization, narrowness and uniformity. This scheme recognizes four major chemical families: aliphatic (delocalized), aromatic (delocalized), electro-attractive (narrowed) and tiny (uniform). All chemical families recognized by the existing energy-based classifications are embraced by this entropic scheme. Finally, novel chemical patterns are shown in the information planes associated with the PIQS entropic measures. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. An adaptive morphological gradient lifting wavelet for detecting bearing defects

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng

    2012-05-01

    This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the fact that the scheme can select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme, so the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency, making it quite suitable for online condition monitoring of bearings and other rotating machinery.
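The adaptive selection between an average filter and a morphological gradient filter can be illustrated with a one-level lifting-style decomposition. This is a simplified sketch of the selection mechanism only; the window size, threshold and operators are assumptions, not the authors' exact lifting operators.

```python
import numpy as np

def amglw_level(x, thresh=0.5):
    """One decomposition level sketching the AMGLW idea: split the signal
    into even/odd samples, predict the odds from the evens, then build the
    approximation with either an average filter or a morphological
    gradient filter, chosen per sample from the local gradient."""
    even, odd = x[::2], x[1::2]
    detail = odd - even                       # simple prediction step
    pad = np.pad(even, 1, mode="edge")
    win = np.stack([pad[:-2], pad[1:-1], pad[2:]])   # 3-sample windows
    grad = win.max(0) - win.min(0)            # morphological gradient
    avg = win.mean(0)                         # average (smoothing) filter
    # adaptive selection: keep the gradient response near impulses,
    # the smoothed value elsewhere
    approx = np.where(grad > thresh, grad, avg)
    return approx, detail

x = np.zeros(32)
x[10] = 5.0                                   # an impulsive "defect"
approx, detail = amglw_level(x)
print(approx[4:7], detail[5])
```

The impulse survives in the approximation where the local gradient is large, while smooth regions are averaged, which mirrors the enhance-impulses/suppress-noise behavior described in the abstract.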

  6. HIV-1 protease cleavage site prediction based on two-stage feature selection method.

    PubMed

    Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong

    2013-03-01

    Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. An accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an improvement over the original feature set of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
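Correlation-based feature subset selection scores a candidate subset by high feature-class correlation and low feature-feature redundancy. A minimal numpy sketch of that merit function, with a greedy forward search standing in for the paper's genetic-algorithm search and synthetic data in place of the cleavage-site features:

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS-style merit (Hall, 1999): average feature-class correlation
    over average feature-feature correlation for the subset."""
    k = len(subset)
    rcf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        rff = 1.0
    else:
        rff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                       for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

def forward_cfs(X, y, n_select):
    """Greedy forward search over the merit (a simple stand-in for the
    genetic-algorithm search used in the paper)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        best = max(remaining, key=lambda j: cfs_merit(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = rng.normal(size=(200, 20))
X[:, 3] += 2.0 * y            # make feature 3 strongly class-correlated
print(forward_cfs(X, y, 3))   # feature 3 should be picked first
```

The merit denominator penalizes redundant subsets, which is what keeps the selected set compact.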

  7. Conspicuity assessment of selected propeller and tail rotor paint schemes.

    DOT National Transportation Integrated Search

    1978-08-01

    An investigation was conducted to rank the conspicuity of three paint schemes for airplane propellers and two schemes for tail rotor blades previously recommended by the U.S. military and the British Civil Aviation Authority. Thirty volunteer subjects with n...

  8. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance is heavily dependent on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  9. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance is heavily dependent on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  10. A multiple kernel support vector machine scheme for feature selection and rule extraction from gene expression data of cancer tissue.

    PubMed

    Chen, Zhenyu; Li, Jianping; Wei, Liwei

    2007-10-01

    Recently, gene expression profiling using microarray techniques has been shown to be a promising tool for improving the diagnosis and treatment of cancer. Gene expression data contain a high level of noise and an overwhelming number of genes relative to the number of available samples, which poses a great challenge for machine learning and statistical techniques. The support vector machine (SVM) has been successfully used to classify gene expression data of cancer tissue. In the medical field, it is crucial to deliver a transparent decision process to the user, and explaining the computed solutions and presenting the extracted knowledge is a main obstacle for the SVM. A multiple kernel support vector machine (MK-SVM) scheme, consisting of feature selection, rule extraction and prediction modeling, is proposed to improve the explanation capacity of the SVM. In this scheme, we show that the feature selection problem can be translated into an ordinary multiple-parameter learning problem, and a shrinkage approach, 1-norm based linear programming, is proposed to obtain the sparse parameters and the corresponding selected features. We also propose a novel rule extraction approach using the information provided by the separating hyperplane and support vectors to improve the generalization capacity and comprehensibility of the rules and to reduce the computational complexity. Two public gene expression datasets, a leukemia dataset and a colon tumor dataset, are used to demonstrate the performance of this approach. Using the small number of selected genes, MK-SVM achieves encouraging classification accuracy: more than 90% for both datasets. Moreover, very simple rules with linguistic labels are extracted. The rule sets have high diagnostic power because of their good classification performance.
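The shrinkage step, a 1-norm SVM posed as a linear program so that many coefficients shrink exactly to zero, can be sketched with scipy's LP solver. The data here are synthetic and the formulation is the standard 1-norm soft-margin LP, offered as an illustration of the idea rather than the MK-SVM implementation itself.

```python
import numpy as np
from scipy.optimize import linprog

def l1_svm(X, y, C=1.0):
    """1-norm SVM as a linear program.  y must be in {-1, +1}.
    Decision variables: w = u - v (u, v >= 0), b = bp - bm, slacks xi.
    Objective: ||w||_1 + C * sum(xi), s.t. y_i (w.x_i + b) >= 1 - xi_i."""
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), np.zeros(2), C * np.ones(n)])
    Yx = y[:, None] * X
    # margin constraints rewritten as A_ub z <= -1
    A = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    res = linprog(c, A_ub=A, b_ub=-np.ones(n),
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=80))
w, b = l1_svm(X, y, C=1.0)
print(np.round(w, 3))   # L1 shrinkage drives irrelevant weights to zero
```

The features with nonzero w are the "selected" ones; in the paper's scheme this sparsity is what yields the gene subset.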

  11. The Importance of Neighborhood Scheme Selection in Agent-based Tumor Growth Modeling.

    PubMed

    Tzedakis, Georgios; Tzamali, Eleftheria; Marias, Kostas; Sakkalis, Vangelis

    2015-01-01

    Modeling tumor growth has proven a very challenging problem, mainly because tumors are highly complex systems involving dynamic interactions that span multiple scales in both time and space. The desire to describe interactions at various scales has given rise to modeling approaches that use both continuous and discrete variables, known as hybrid approaches. This work considers a hybrid model on a 2D square lattice focusing on cell movement dynamics, as these play an important role in tumor morphology, invasion and metastasis and are considered indicators of the stage of malignancy, used for early prognosis and effective treatment. Considering various distributions of the microenvironment, we explore how the Neumann and Moore neighborhood schemes affect tumor growth and morphology. The results indicate that neighborhood selection is critical under specific conditions that include i) an increased hapto/chemotactic coefficient, ii) a rugged microenvironment and iii) ECM degradation.
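The two neighborhood schemes differ only in which lattice sites count as adjacent: von Neumann uses the 4 orthogonal cells, Moore the 8 surrounding cells. A minimal sketch of the candidate move sites a migrating cell would see under each scheme (the grid size is arbitrary):

```python
import numpy as np

def neighbors(i, j, shape, scheme="moore"):
    """Lattice neighbors of cell (i, j): von Neumann gives the 4
    orthogonal cells, Moore the 8 surrounding cells; sites outside
    the grid are clipped."""
    if scheme == "neumann":
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)]
    return [(i + di, j + dj) for di, dj in offsets
            if 0 <= i + di < shape[0] and 0 <= j + dj < shape[1]]

# interior cell: 4 vs 8 candidate move sites for a migrating tumor cell
print(len(neighbors(5, 5, (10, 10), "neumann")))  # 4
print(len(neighbors(5, 5, (10, 10), "moore")))    # 8
```

The doubled set of move directions under Moore is what interacts with steep haptotactic gradients and a rugged microenvironment in the study's findings.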

  12. Feed-forward frequency offset estimation for 32-QAM optical coherent detection.

    PubMed

    Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming

    2017-04-17

    Due to the non-rectangular distribution of the constellation points, traditional fast Fourier transform based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after the adaptive equalization, which we define as QPSK-selection assisted FFT-FOE. Simulation results show that no FOE error occurs with an FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using our proposed FOE technique, whereas the error probability of the traditional FFT-FOE scheme for 32-QAM remains intolerably high. Finally, our proposed FOE scheme functions well for a 10 Gbaud dual-polarization (DP)-32-QAM signal at the 20% forward error correction (FEC) threshold of BER = 2×10^-2, under a back-to-back (B2B) transmission scenario.
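The scheme builds on the classic 4th-power FFT estimator for QPSK: raising the symbols to the 4th power strips the modulation and leaves a tone at four times the frequency offset. Below is a noise-free numpy sketch of that underlying estimator with illustrative parameters; the paper's 32-QAM ring-selection front end is not reproduced here.

```python
import numpy as np

def fft_foe(symbols, baud_rate, nfft=512):
    """4th-power FFT frequency-offset estimator for (near-)QPSK symbols:
    the spectrum of symbols**4 peaks at 4x the carrier frequency offset."""
    spectrum = np.fft.fft(symbols[:nfft] ** 4, nfft)
    peak = np.argmax(np.abs(spectrum))
    freq = np.fft.fftfreq(nfft, d=1.0 / baud_rate)
    return freq[peak] / 4.0

# QPSK symbols with a 100 MHz offset at 10 Gbaud
rng = np.random.default_rng(2)
baud = 10e9
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 512)))
n = np.arange(512)
rx = qpsk * np.exp(2j * np.pi * 100e6 * n / baud)
print(fft_foe(rx, baud) / 1e6)   # estimated offset in MHz
```

The estimate is quantized to the FFT bin spacing divided by four, which is why an FFT size of 512 symbols already gives useful resolution at 10 Gbaud.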

  13. Research on comprehensive decision-making of PV power station connecting system

    NASA Astrophysics Data System (ADS)

    Zhou, Erxiong; Xin, Chaoshan; Ma, Botao; Cheng, Kai

    2018-04-01

    To address the incomplete index system and the lack of decision methods that account for both subjectivity and objectivity in selecting the connecting system of a PV power station, a comprehensive approach combining an improved Analytic Hierarchy Process (AHP), Criteria Importance Through Intercriteria Correlation (CRITIC) and grey correlation degree analysis (GCDA) is proposed to select the appropriate connecting scheme. First, the indexes of the PV power station connecting system are arranged into a hierarchical structure and their subjective weights are calculated with the improved AHP. Then, CRITIC is adopted to determine the objective weight of each index from the contrast intensity and the conflict between indexes. Finally, the improved GCDA is applied to screen the optimal scheme, so that the connecting system is selected from both subjective and objective perspectives. A comprehensive decision analysis of a Xinjiang PV power station is conducted and reasonable results are obtained. The research results may provide a scientific basis for investment decisions.
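The CRITIC objective-weighting step has a compact closed form: each criterion's weight combines its contrast intensity (standard deviation) with its conflict with the other criteria (one minus the pairwise correlations). A numpy sketch on a hypothetical decision matrix (the scheme values are invented, not from the paper):

```python
import numpy as np

def critic_weights(M):
    """CRITIC objective weights for a decision matrix M
    (rows = candidate schemes, columns = evaluation indexes)."""
    # normalise each index column to [0, 1]
    Z = (M - M.min(0)) / (M.max(0) - M.min(0))
    sigma = Z.std(0, ddof=1)              # contrast intensity
    R = np.corrcoef(Z, rowvar=False)      # index-index correlations
    conflict = (1.0 - R).sum(0)           # total conflict per index
    info = sigma * conflict               # information content
    return info / info.sum()              # normalised weights

# hypothetical decision matrix: 4 candidate connecting schemes x 3 indexes
M = np.array([[0.7, 120.0, 3.1],
              [0.9,  95.0, 2.8],
              [0.6, 140.0, 3.5],
              [0.8, 110.0, 2.9]])
w = critic_weights(M)
print(np.round(w, 3))
```

These objective weights would then be blended with the AHP subjective weights before the grey correlation ranking.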

  14. Adaptive fuzzy-neural-network control for maglev transportation system.

    PubMed

    Wai, Rong-Jong; Lee, Jeng-Dao

    2008-01-01

    A magnetic-levitation (maglev) transportation system including levitation and propulsion control is a subject of considerable scientific interest because of its highly nonlinear and unstable behavior. In this paper, the dynamic model of a maglev transportation system including levitated electromagnets and a propulsive linear induction motor (LIM), based on the concepts of mechanical geometry and motion dynamics, is developed first. Then, a model-based sliding-mode control (SMC) strategy is introduced. In order to alleviate the chattering phenomena caused by an inappropriate selection of the uncertainty bound, a simple bound estimation algorithm is embedded in the SMC strategy to form an adaptive sliding-mode control (ASMC) scheme. However, the output of this estimation algorithm is always positive, so tracking errors introduced by any uncertainty cause the estimated bound to increase, even toward infinity, over time. Therefore, an adaptive fuzzy-neural-network control (AFNNC) scheme is further designed, imitating the SMC strategy, for the maglev transportation system. In the model-free AFNNC, online learning algorithms are designed to cope with the chattering phenomena caused by the sign action in the SMC design and to ensure the stability of the controlled system without requiring auxiliary compensated controllers, despite the existence of uncertainties. The outputs of the AFNNC scheme can be directly supplied to the electromagnets and the LIM without complicated control transformations, relaxing the strict constraints of conventional model-based control methodologies. The effectiveness of the proposed control schemes for the maglev transportation system is verified by numerical simulations, and the superiority of the AFNNC scheme is indicated in comparison with the SMC and ASMC strategies.

  15. Random forest feature selection, fusion and ensemble strategy: Combining multiple morphological MRI measures to discriminate among healthy elderly, MCI, cMCI and Alzheimer's disease patients: From the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

    PubMed

    Dimitriadis, S I; Liparas, Dimitris; Tsolaki, Magda N

    2018-05-15

    In the era of computer-assisted diagnostic tools for various brain diseases, Alzheimer's disease (AD) covers a large percentage of neuroimaging research, with the main scope being its use in daily practice. However, there has been no study attempting to simultaneously discriminate among healthy controls (HC), early mild cognitive impairment (MCI), late MCI (cMCI) and stable AD using features derived from a single modality, namely MRI. Based on preprocessed MRI images from the organizers of a neuroimaging challenge, we attempted to quantify the prediction accuracy of multiple morphological MRI features to simultaneously discriminate among HC, MCI, cMCI and AD. We explored the efficacy of a novel scheme that includes multiple feature selections via Random Forest from subsets of the whole set of features (e.g. the whole set, left/right hemisphere, etc.), Random Forest classification using a fusion approach, and ensemble classification via majority voting. From the ADNI database, 60 HC, 60 MCI, 60 cMCI and 60 AD subjects were used as a training set with known labels. An extra dataset of 160 subjects (HC: 40, MCI: 40, cMCI: 40 and AD: 40) was used as an external blind validation dataset to evaluate the proposed machine learning scheme. On the blind dataset, we achieved a four-class classification accuracy of 61.9% by combining MRI-based features with a Random Forest-based ensemble strategy, the best classification accuracy of all teams that participated in this neuroimaging competition. The results demonstrate the effectiveness of the proposed scheme to simultaneously discriminate among four groups using morphological MRI features for the very first time in the literature. Hence, the proposed machine learning scheme can be used to define single and multi-modal biomarkers for AD. Copyright © 2017 Elsevier B.V. All rights reserved.
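The ensemble structure, one Random Forest per feature subset followed by majority voting, can be sketched with scikit-learn. The data and the three feature subsets below are synthetic stand-ins for the morphological MRI measures, assumed only for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# synthetic stand-in for morphological MRI features: 4 classes, 40 features
n, d = 400, 40
y = rng.integers(0, 4, n)
X = rng.normal(size=(n, d))
X[:, :8] += y[:, None]        # the first 8 features carry the class signal

# "whole set / first half / second half" style feature subsets
subsets = [np.arange(d), np.arange(d // 2), np.arange(d // 2, d)]

# one Random Forest per feature subset, then majority voting across them
forests = [RandomForestClassifier(n_estimators=100, random_state=0)
           .fit(X[:300][:, s], y[:300]) for s in subsets]
votes = np.stack([f.predict(X[300:][:, s]) for f, s in zip(forests, subsets)])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
acc = (majority == y[300:]).mean()
print(round(acc, 3))
```

Majority voting lets the forests trained on informative subsets outvote one trained on an uninformative subset, which is the robustness argument behind the fusion step.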

  16. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method, BackTrackBB (Poiata et al. 2016), that images energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy envelope characteristic functions. Such signal processing is designed to detect signal transients in time, of different scales and a priori unknown predominant frequency, potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the 3-component records, makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; and (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.

  17. Covering All the Bases: A Model Hazardous Waste Program for Small Universities.

    ERIC Educational Resources Information Center

    MacPherson, Robert A.

    1991-01-01

    The Colorado School of Mines' experience illustrates that with good planning and enough money, a small university can provide a high level of waste management service, complying with government regulations. Considerations in developing the plan include a segregation scheme for incompatible materials, vehicle selection, and costs of ongoing…

  18. Integrating genomics into future approaches for cocoa selection and propagation in Côte d’Ivoire

    USDA-ARS?s Scientific Manuscript database

    In Côte d'Ivoire cocoa breeding is based on a reciprocal recurrent scheme that has been prepared with the aim of simultaneously improving the characteristics of the two main populations: Upper Amazon and Lower Amazon + Trinitario. Resistance to Phytophthora and Cacao Swollen Shoot Virus has become th...

  19. Optimization of genomic selection training populations with a genetic algorithm

    USDA-ARS?s Scientific Manuscript database

    In this article, we derive a computationally efficient statistic to measure the reliability of estimates of genetic breeding values for a fixed set of genotypes based on a given training set of genotypes and phenotypes. We adopt a genetic algorithm scheme to find a training set of certain size from ...

  20. Continuous recovery of valine in a model mixture of amino acids and salt from Corynebacterium bacteria fermentation using a simulated moving bed chromatography.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Jo, Se-Hee; Wang, Nien-Hwa Linda; Mun, Sungyong

    2016-02-26

    The economic efficiency of valine production in related industries is largely affected by the performance of the valine separation process, in which valine is to be separated from leucine, alanine, and ammonium sulfate. Such separation is currently handled by a batch-mode hybrid process based on ion-exchange and crystallization schemes. To make a substantial improvement in the economic efficiency of industrial valine production, such a batch-mode process based on two different separation schemes needs to be converted into a continuous-mode separation process based on a single separation scheme. To address this issue, simulated moving bed (SMB) technology was applied in this study to the development of a continuous-mode valine-separation chromatographic process with uniformity in the adsorbent and liquid phases. It was first found that a Chromalite-PCG600C resin could be eligible as the adsorbent of such a process, particularly at industrial scale. The intrinsic parameters of each component on the Chromalite-PCG600C adsorbent were determined and then utilized in selecting a proper set of configurations for the SMB units, columns, and ports, under which the SMB operating parameters were optimized with a genetic algorithm. Finally, the optimized SMB based on the selected configurations was tested experimentally, which confirmed its effectiveness in the continuous separation of valine from leucine, alanine, and ammonium sulfate with high purity, high yield, high throughput, and high valine product concentration. It is thus expected that the SMB process developed in this study will serve as a trustworthy way of improving the economic efficiency of an industrial valine production process. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  2. Design of a Vehicle-Based Intervention System to Prevent Ozone Loss

    NASA Technical Reports Server (NTRS)

    Mason, William H.; Kirchbaum, Nathan; Kay, Jacob; Benoliel, Alexander M.; Lynn, Sean R.; Bunker, Deborah; Hesbach, Thomas D., Jr.; Howerton, Everett B.; Hreinsson, Gudbjoern; Mistr, E. Kirk

    1993-01-01

    Reduced quantities of ozone in the atmosphere allow greater levels of ultraviolet (UV) radiation to reach the earth's surface. The 1992/1993 project goals for the Virginia Tech Senior Design Team were to 1) understand the processes which contribute to stratospheric ozone loss, 2) examine ways to prevent ozone loss, and 3) define the requirements for an implementation vehicle to carry out the prevention scheme. A scheme proposed by R. J. Cicerone et al. late in 1991 was selected because of its supporting research and economic feasibility. This scheme uses hydrocarbon injected into the Antarctic ozone hole to form stable compounds with free chlorine, thus reducing ozone depletion. A study of the hydrocarbon injection requirements determined that 130 aircraft traveling at Mach 2.4 at a maximum altitude of 66,000 ft would provide the most economic approach to preventing ozone loss. Each aircraft would require an 8,000 n.m. range and be able to carry 35,000 lbs of propane. The propane would be stored in a three-tank high-pressure system. Modularity and multi-role functionality were selected as key design features. Missions originate from airports located in South America and Australia.

  3. Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube: IceCube Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Ackermann, M.; Adams, J.

    Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.

  4. Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube: IceCube Collaboration

    DOE PAGES

    Aartsen, M. G.; Ackermann, M.; Adams, J.; ...

    2015-03-11

    Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.

  5. Understanding the types of fraud in claims to South African medical schemes.

    PubMed

    Legotlo, T G; Mutezo, A

    2018-03-28

    Medical schemes play a significant role in funding private healthcare in South Africa (SA). However, the sector is negatively affected by the high rate of fraudulent claims. To identify the types of fraudulent activities committed in SA medical scheme claims. A cross-sectional qualitative study was conducted, adopting a case study strategy. A sample of 15 employees was purposively selected from a single medical scheme administration company in SA. Semi-structured interviews were conducted to collect data from study participants. A thematic analysis of the data was done using ATLAS.ti software (ATLAS.ti Scientific Software Development, Germany). The study population comprised the 17 companies that administer medical schemes in SA. Data were collected from 15 study participants, who were selected from the medical scheme administrator chosen as a case study. The study found that medical schemes were defrauded in numerous ways. The perpetrators of this type of fraud include healthcare service providers, medical scheme members, employees, brokers and syndicates. Medical schemes are mostly defrauded by the submission of false claims by service providers and syndicates. Fraud committed by medical scheme members encompasses the sharing of medical scheme benefits with non-members (card farming) and non-disclosure of pre-existing conditions at the application stage. The study concluded that perpetrators of fraud have found several ways of defrauding SA medical schemes regarding claims. Understanding and identifying the types of fraud events facing medical schemes is the initial step towards establishing methods to mitigate this risk. Future studies should examine strategies to manage fraudulent medical scheme claims.

  6. Establishing sustainable performance-based incentive schemes: views of rural health workers from qualitative research in three sub-Saharan African countries.

    PubMed

    Yé, M; Aninanya, G A; Sié, A; Kakoko, D C V; Chatio, S; Kagoné, M; Prytherch, H; Loukanova, S; Williams, J E; Sauerborn, R

    2014-01-01

    Performance-based incentives (PBIs) are currently receiving attention as a strategy for improving the quality of care that health providers deliver. Experiences from several African countries have shown that PBIs can trigger improvements, particularly in the area of maternal and neonatal health. The involvement of health workers in deciding how their performance should be measured is recommended, but only limited information is available about how such schemes can be made sustainable. This study explored the types of PBIs that rural health workers suggested, their ideas regarding the management and sustainability of such schemes, and their views on which indicators best lend themselves to the monitoring of performance. In this article the authors report findings from a cross-country survey conducted in Burkina Faso, Ghana and Tanzania. The study was exploratory, with a qualitative methodology. In-depth interviews were conducted with 29 maternal and neonatal healthcare providers, four district health managers and two policy makers (35 respondents in total) from one district in each of the three countries. The respondents were purposively selected from six peripheral health facilities. Care was taken to include providers who had a management role. By also including respondents from the district and policy levels, a comparison of perspectives from different levels of the health system was facilitated. The collected data were coded and analysed with the support of NVivo v8 software. The most frequently suggested PBIs amongst the respondents in Burkina Faso were training with per-diems, bonuses and recognition of work done. The respondents in Tanzania favoured training with per-diems, as well as payment of overtime, and timely promotion. The respondents in Ghana also called for training, including paid study leave, payment of overtime and recognition schemes for health workers or facilities. 
Respondents in the three countries supported the mobilisation of local resources to make incentive schemes more sustainable. There was a general view that it was easier to integrate the cost of non-financial incentives in local budgets. There were concerns about the fairness of such schemes from the provider level in all three countries. District managers were worried about the workload that would be required to manage the schemes. The providers themselves were less clear about which indicators best lent themselves to the purpose of performance monitoring. District managers and policy makers most commonly suggested indicators that were in line with national maternal and neonatal healthcare indicators. The study showed that health workers have considerable interest in performance-based incentive schemes and are concerned about their sustainability. There is a need to further explore the use of non-financial incentives in PBI schemes, as such incentives were considered to stand a greater chance of being integrated into local budgets. Ensuring participation of healthcare providers in the design of such schemes is likely to achieve buy-in and endorsement from the health workers involved. However, input from managers and policy makers is essential to keep expectations realistic and to ensure the indicators selected fit the purpose and are part of routine reporting systems.

  7. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the most frequently selected features by the SFFS-based algorithm in ten-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. 
Conclusions: In conclusion, this comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes that may have potential to be useful as a “second reader” in future clinical practice. PMID:24664267
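The wrapper-style selection described above can be sketched with scikit-learn. Note that scikit-learn's `SequentialFeatureSelector` implements plain sequential forward selection (without the floating backtracking step of SFFS), so this is a simplified stand-in, and the synthetic dataset merely mimics a large feature pool; it is not the mammography data from the study.

```python
# Simplified sketch of SVM-based forward feature selection (non-floating
# variant of the SFFS idea above); data is synthetic, not mammography ROIs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a large image-feature pool.
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)

svm = SVC(kernel="rbf", C=1.0)
selector = SequentialFeatureSelector(svm, n_features_to_select=6,
                                     direction="forward", cv=5)
selector.fit(X, y)
selected = np.flatnonzero(selector.get_support())

# Cross-validated accuracy on the reduced feature set only.
score = cross_val_score(svm, X[:, selected], y, cv=5).mean()
print(len(selected), round(score, 3))
```

As in the paper, any parameter tuning and the selection itself should be done inside the training folds to avoid optimistic bias.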

  8. Future efficiency of run of the river hydropower schemes based on climate change scenarios: case study in UK catchments

    NASA Astrophysics Data System (ADS)

    Pasten Zapata, Ernesto; Moggridge, Helen; Jones, Julie; Widmann, Martin

    2017-04-01

    Run-of-the-River (ROR) hydropower schemes are expected to be significantly affected by climate change, as they rely on the availability of river flow to generate energy. As temperature and precipitation are expected to vary in the future, the hydrological cycle will also undergo changes. Therefore, climate models based on complex physical atmospheric interactions have been developed to simulate future climate scenarios given the atmosphere's greenhouse gas concentrations. These scenarios are classified according to Representative Concentration Pathways (RCPs), which are defined by greenhouse gas concentrations. This study evaluates possible scenarios for selected ROR hydropower schemes within the UK, considering three different RCPs: 2.6, 4.5 and 8.5 W/m2 for 2100 relative to pre-industrial values. The study sites cover different climate, land cover, topographic and hydropower scheme characteristics representative of the UK's heterogeneity. Precipitation and temperature outputs from state-of-the-art Regional Climate Models (RCMs) from the Euro-CORDEX project are used as input to a HEC-HMS hydrological model to simulate the future river flow available. Both uncorrected and bias-corrected RCM simulations are analyzed. The results of this project provide insight into the possible effects of climate change on power generation from the ROR hydropower schemes under the different RCP scenarios and contrast the results obtained from uncorrected and bias-corrected RCMs. This analysis can aid in adaptation to climate change as well as in the planning of future ROR schemes in the region.

  9. A reliable transmission protocol for ZigBee-based wireless patient monitoring.

    PubMed

    Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung

    2012-01-01

    Patient monitoring systems are gaining importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, existing systems usually use broadcast or multicast schemes to increase the reliability of signal transmission; however, both schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as a destination to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of this reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next-generation wireless wide area network technology, Worldwide Interoperability for Microwave Access (WiMAX), to achieve real-time patient monitoring.
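The core anycast idea, choosing the closest member of a receiver group rather than broadcasting to all of them, can be sketched with a hop-count metric over a toy topology. The graph, node names, and BFS-based distance are illustrative assumptions; the actual protocol operates at the ZigBee routing layer.

```python
# Toy sketch of anycast receiver selection: among a group of data sinks,
# pick the one with the fewest hops from the sensing node. Topology and
# names are hypothetical, not from the paper's implementation.
from collections import deque

def hop_counts(graph, source):
    """BFS hop distance from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def select_anycast_receiver(graph, sensor, receivers):
    """Choose the closest member of the anycast group."""
    dist = hop_counts(graph, sensor)
    reachable = [r for r in receivers if r in dist]
    return min(reachable, key=lambda r: dist[r])

# Multihop topology: sensor-a-b-sink1 (3 hops) vs. sensor-c-sink2 (2 hops).
graph = {
    "sensor": ["a", "c"], "a": ["sensor", "b"], "b": ["a", "sink1"],
    "c": ["sensor", "sink2"], "sink1": ["b"], "sink2": ["c"],
}
print(select_anycast_receiver(graph, "sensor", ["sink1", "sink2"]))  # sink2
```

Selecting the nearer sink is what reduces both latency and control overhead relative to flooding every receiver.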

  10. Multilayer Volume Holographic Optical Memory

    NASA Technical Reports Server (NTRS)

    Markov, Vladimir; Millerd, James; Trolinger, James; Norrie, Mark; Downie, John; Timucin, Dogan; Lau, Sonie (Technical Monitor)

    1998-01-01

    We demonstrate a scheme for volume holographic storage based on the features of shift selectivity of a speckle reference wave hologram. The proposed recording method allows more efficient use of the recording medium and increases the storage density in comparison with spherical or plane-wave reference beams. Experimental results of multiple hologram storage and replay in a photorefractive crystal of iron-doped lithium niobate are presented. The mechanisms of lateral and longitudinal shift selectivity are described theoretically and shown to agree with experimental measurements.

  11. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.

    2014-02-01

    This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces solving the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show that, by selecting relatively few J-GL-C points, accurate approximations are obtained, demonstrating the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.
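For the special case of Jacobi parameters α = β = 0, the Jacobi-Gauss-Lobatto collocation points reduce to the Legendre-Gauss-Lobatto nodes: the endpoints ±1 plus the roots of P'_{n−1}(x). A minimal sketch of computing these nodes (not the full J-GL-C solver from the paper):

```python
# Legendre-Gauss-Lobatto nodes: the alpha = beta = 0 special case of the
# Jacobi-Gauss-Lobatto points used by the collocation scheme above.
import numpy as np

def legendre_gauss_lobatto(n):
    """Return the n Legendre-Gauss-Lobatto nodes on [-1, 1]: the endpoints
    plus the roots of the derivative of the degree-(n-1) Legendre polynomial."""
    interior = np.polynomial.legendre.Legendre.basis(n - 1).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

nodes = legendre_gauss_lobatto(4)
print(nodes)  # [-1, -1/sqrt(5), 1/sqrt(5), 1]
```

Clustering of these nodes toward the interval endpoints is what lets a collocation scheme reach high accuracy with relatively few points.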

  12. Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers.

    PubMed

    Mougiakakou, Stavroula G; Valavanis, Ioannis K; Nikita, Alexandra; Nikita, Konstantina S

    2007-09-01

    The aim of the present study is to define an optimally performing computer-aided diagnosis (CAD) architecture for the classification of liver tissue from non-enhanced computed tomography (CT) images into normal liver (C1), hepatic cyst (C2), hemangioma (C3), and hepatocellular carcinoma (C4). To this end, various CAD architectures, based on texture features and ensembles of classifiers (ECs), are comparatively assessed. A number of regions of interest (ROIs) corresponding to C1-C4 were defined by experienced radiologists in non-enhanced liver CT images. For each ROI, five distinct sets of texture features were extracted using first order statistics, spatial gray level dependence matrix, gray level difference method, Laws' texture energy measures, and fractal dimension measurements. Two different ECs were constructed and compared. The first one consists of five multilayer perceptron neural networks (NNs), each using as input one of the computed texture feature sets or its reduced version after genetic algorithm-based feature selection. The second EC comprised five different primary classifiers, namely one multilayer perceptron NN, one probabilistic NN, and three k-nearest neighbor classifiers, each fed with the combination of the five texture feature sets or their reduced versions. The final decision of each EC was extracted by using appropriate voting schemes, while bootstrap re-sampling was utilized in order to estimate the generalization ability of the CAD architectures based on the available relatively small-sized data set. The best mean classification accuracy (84.96%) is achieved by the second EC using a fused feature set, and the weighted voting scheme. The fused feature set was obtained after appropriate feature selection applied to specific subsets of the original feature set. 
The comparative assessment of the various CAD architectures shows that combining three types of classifiers with a voting scheme, fed with identical feature sets obtained after appropriate feature selection and fusion, may result in an accurate system able to assist differential diagnosis of focal liver lesions from non-enhanced CT images.
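The second ensemble's combination step can be sketched with scikit-learn's weighted soft voting. The classifier types loosely mirror the paper (an MLP, a probabilistic model, k-NN), but the data is synthetic and the weights are illustrative, not the ones learned in the study.

```python
# Sketch of a heterogeneous ensemble combined by weighted voting, in the
# spirit of the second EC above; data and weights are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic 4-class problem standing in for the C1-C4 liver tissue classes.
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ("nb", GaussianNB()),          # rough stand-in for the probabilistic NN
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft", weights=[2, 1, 1],  # weighted voting scheme
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
print(round(accuracy, 3))
```

Soft voting averages the members' class probabilities with the given weights, which is one common way to realize the "weighted voting scheme" the abstract reports as best-performing.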

  13. Automated identification of abnormal metaphase chromosome cells for the detection of chronic myeloid leukemia using microscopic images

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Chen, Xiaodong; Liu, Hong

    2010-07-01

    Karyotyping is an important process to classify chromosomes into standard classes, and the results are routinely used by clinicians to diagnose cancers and genetic diseases. However, visual karyotyping using microscopic images is time-consuming and tedious, which reduces diagnostic efficiency and accuracy. Although many efforts have been made to develop computerized schemes for automated karyotyping, no scheme can operate without substantial human intervention. Instead of developing a method to classify all chromosome classes, we develop an automatic scheme to detect abnormal metaphase cells by identifying a specific class of chromosomes (class 22) and prescreen for suspicious chronic myeloid leukemia (CML). The scheme includes three steps: (1) iteratively segment randomly distributed individual chromosomes, (2) process segmented chromosomes and compute image features to identify the candidates, and (3) apply an adaptive matching template to identify chromosomes of class 22. An image data set of 451 metaphase cells extracted from bone marrow specimens of 30 positive and 30 negative cases for CML is selected to test the scheme's performance. The overall case-based classification accuracy is 93.3% (100% sensitivity and 86.7% specificity). The results demonstrate the feasibility of applying an automated scheme to detect or prescreen the suspicious cancer cases.

  14. Matching soil salinization and cropping systems in communally managed irrigation schemes

    NASA Astrophysics Data System (ADS)

    Malota, Mphatso; Mchenga, Joshua

    2018-03-01

    Occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed, 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that even for crops with very low salinity tolerance (ECi < 2 dS/m), the irrigation water was suitable for irrigation purposes. However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be revisited and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.
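The leaching requirement mentioned above is commonly estimated with the FAO (Ayers and Westcot) guideline formula, LR = ECw / (5·ECe − ECw), where ECw is the irrigation water salinity and ECe the soil salinity the crop can tolerate. The values below are illustrative, not measurements from the Nkhate scheme.

```python
# Standard FAO guideline estimate of the leaching requirement; the numbers
# here are illustrative, not data from the study.

def leaching_requirement(ec_irrigation, ec_soil_tolerated):
    """LR = ECw / (5 * ECe - ECw): the minimum fraction of applied water
    that must drain below the root zone to keep salts from accumulating."""
    return ec_irrigation / (5.0 * ec_soil_tolerated - ec_irrigation)

# e.g. irrigation water at 1 dS/m, crop tolerating soil salinity of 2 dS/m:
lr = leaching_requirement(1.0, 2.0)
print(round(lr, 3))  # 0.111
```

Roughly 11% extra water would need to pass below the root zone in this example; inadequate leaching, as the study observed, lets root-zone salinity climb even when the irrigation water itself is acceptable.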

  15. Cavity BPM with Dipole-Mode-Selective Coupler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zenghai; Johnson, Ronald; Smith, Stephen R.

    2006-06-21

    In this paper, we present a novel position-sensitive signal pickup scheme for a cavity BPM. The scheme utilizes the H-plane of the waveguide to couple magnetically to the side of the cavity, which results in selective coupling to the dipole mode and total rejection of the monopole mode. This scheme greatly simplifies the BPM geometry and relaxes machining tolerances. We present detailed numerical studies of such a cavity BPM and analyze its resolution limit and the tolerance requirements for nanometer resolution. Finally, we present the measurement results of an X-band prototype.

  16. Access point selection game with mobile users using correlated equilibrium.

    PubMed

    Sohn, Insoo

    2015-01-01

    One of the most important issues in wireless local area network (WLAN) systems with multiple access points (APs) is the AP selection problem. Game theory is a mathematical tool used to analyze the interactions in multiplayer systems and has been applied to various problems in wireless networks. Correlated equilibrium (CE) is one of the powerful game theory solution concepts, which is more general than the Nash equilibrium for analyzing the interactions in multiplayer mixed strategy games. A game-theoretic formulation of the AP selection problem with mobile users is presented using a novel scheme based on a regret-based learning procedure. Through convergence analysis, we show that the joint actions based on the proposed algorithm achieve CE. Simulation results illustrate that the proposed algorithm is effective in a realistic WLAN environment with user mobility and achieves maximum system throughput based on the game-theoretic formulation.
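The regret-based learning procedure referenced above can be illustrated with Hart and Mas-Colell's regret matching, whose empirical play converges to the set of correlated equilibria. The sketch below runs it on matching pennies rather than the paper's AP selection game, purely to show the mechanics.

```python
# Regret matching (Hart & Mas-Colell): each player plays actions with
# probability proportional to positive cumulative regret; the empirical
# joint distribution converges to the set of correlated equilibria.
# The game here is matching pennies, not the AP selection game.
import numpy as np

rng = np.random.default_rng(0)

# Payoff matrices indexed as payoff[player][a0, a1] (zero-sum game).
payoff = [np.array([[1, -1], [-1, 1]]),   # player 0 wants actions to match
          np.array([[-1, 1], [1, -1]])]   # player 1 wants them to differ

regret = [np.zeros(2), np.zeros(2)]
counts = np.zeros((2, 2))                 # empirical joint action counts

for _ in range(20000):
    actions = []
    for i in range(2):
        positive = np.maximum(regret[i], 0.0)
        probs = positive / positive.sum() if positive.sum() > 0 else np.full(2, 0.5)
        actions.append(rng.choice(2, p=probs))
    a0, a1 = actions
    counts[a0, a1] += 1
    # Regret update: how much better each fixed action would have done.
    for a in range(2):
        regret[0][a] += payoff[0][a, a1] - payoff[0][a0, a1]
        regret[1][a] += payoff[1][a0, a] - payoff[1][a0, a1]

marginal0 = counts.sum(axis=1) / counts.sum()
print(np.round(marginal0, 2))  # close to [0.5, 0.5]
```

For matching pennies the unique correlated equilibrium is uniform play, so the empirical marginals approach one half; in the AP selection setting the same procedure would be driven by throughput-based utilities instead.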

  18. Energy spectrum and thermal properties of a terahertz quantum-cascade laser based on the resonant-phonon depopulation scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khabibullin, R. A., E-mail: khabibullin@isvch.ru; Shchavruk, N. V.; Klochkov, A. N.

    The dependences of the electronic-level positions and transition oscillator strengths on an applied electric field are studied for a terahertz quantum-cascade laser (THz QCL) with the resonant-phonon depopulation scheme, based on a cascade consisting of three quantum wells. The electric-field strengths for two characteristic states of the THz QCL under study are calculated: (i) “parasitic” current flow in the structure when the lasing threshold has not yet been reached; (ii) the lasing threshold is reached. Heat-transfer processes in the THz QCL under study are simulated to determine the optimum supply and cooling conditions. The conditions of Au–Au thermocompression bonding of the laser ridge stripe with an n+-GaAs conductive substrate are selected to produce a mechanically stronger contact with a higher thermal conductivity.

  19. Visual Privacy by Context: Proposal and Evaluation of a Level-Based Visualisation Scheme

    PubMed Central

    Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco

    2015-01-01

    Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users. PMID:26053746

  20. Detection scheme for a partially occluded pedestrian based on occluded depth in lidar-radar sensor fusion

    NASA Astrophysics Data System (ADS)

    Kwon, Seong Kyung; Hyun, Eugin; Lee, Jin-Hee; Lee, Jonghun; Son, Sang Hyuk

    2017-11-01

    Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection remains a challenging topic. We propose a new detection scheme for occluded pedestrian detection by means of lidar-radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. Occluded depth is a new means to determine whether an occluded target exists or not. The occluded depth is a region projected out by expanding the longitudinal distance while maintaining the angle formed by the two outermost end points of the lidar RoI. The occlusion RoI is the overlapped region made by superimposing the radar RoI and the occluded depth. The object within the occlusion RoI is detected by the radar measurement information, and the occluded object is estimated as a pedestrian based on human Doppler distribution. Additionally, various experiments are performed in detecting a partially occluded pedestrian in both outdoor and indoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance compared to the case without our proposed method.

  1. The Solution to Pollution is Distribution: Design Your Own Chaotic Flow

    NASA Astrophysics Data System (ADS)

    Tigera, R. G.; Roth, E. J.; Neupauer, R.; Mays, D. C.

    2015-12-01

    Plume spreading promotes the molecular mixing that drives chemical reactions in porous media in general, and remediation reactions in groundwater aquifers in particular. Theoretical analysis suggests that engineered injection and extraction, a specific sequence of pumping through wells surrounding a contaminant plume, can improve groundwater remediation through chaotic advection. Selection of an engineered injection and extraction scheme is difficult, however, because the engineer must recommend a pumping scheme for a contaminated site without any prior knowledge of how the scheme will perform. To address this difficulty, this presentation describes a Graphical User Interface (GUI) designed to help engineers develop, test, and observe pumping schemes as described in previous research (Mays, D.C. and Neupauer, R.M., 2012, Plume spreading in groundwater by stretching and folding, Water Resour. Res., 48, W07501, doi:10.1029/2011WR011567). The inputs allow the user to manipulate the model conditions, such as the number of wells, plume size, and pumping scheme. Plume evolution is modeled, assuming no diffusion or dispersion, using analytical solutions for injection or extraction through individual wells or pairs of wells (i.e., dipoles). Using the GUI, an engineered injection and extraction scheme can be determined that best fits the remediation needs of the contaminated site. By creating multiple injection and extraction schemes, the user can learn about the plume shapes created by different schemes and, ultimately, recommend a pumping scheme based on experience of the fluid flow as shown in the GUI. The pumping schemes developed through this GUI are expected to guide more advanced modeling and laboratory studies that account for the crucial role of dispersion in groundwater remediation.
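The analytical building block behind such pumping schemes is simple: under purely radial flow from a single injection well, a tracer particle at radius r0 moves to sqrt(r0² + Q·t/(π·b)), where Q is the injection rate and b the aquifer thickness (from dr/dt = Q/(2πrb)). The values below are illustrative only, not from the GUI.

```python
# Radial displacement of a particle advected by a single injection well,
# the elementary analytical solution such pumping schemes are built from.
# Parameter values are illustrative.
import math

def radius_after_injection(r0, Q, b, t):
    """New radial position after injecting at rate Q for time t into an
    aquifer of thickness b (no diffusion or dispersion)."""
    return math.sqrt(r0**2 + Q * t / (math.pi * b))

# A particle 1 m from the well, after injecting 10 m^3/d for 5 days
# into a 2 m thick aquifer:
r = radius_after_injection(1.0, 10.0, 2.0, 5.0)
print(round(r, 2))
```

Alternating injection and extraction across several wells composes many such displacements, and it is the sequencing of these simple flows that produces the stretching and folding of the plume.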

  2. Adverse selection in a voluntary Rural Mutual Health Care health insurance scheme in China.

    PubMed

    Wang, Hong; Zhang, Licheng; Yip, Winnie; Hsiao, William

    2006-09-01

    This study examines adverse selection in a subsidized voluntary health insurance scheme, the Rural Mutual Health Care (RMHC) scheme, in a poor rural area of China. The study was made possible by a unique longitudinal data set: the total sample includes 3492 rural residents from 1020 households. Logistic regression was employed for the data analysis. The results show that although this subsidized scheme achieved a considerably high enrollment rate of 71% of rural residents, adverse selection still exists. In general, individuals with worse health status are more likely to enroll in RMHC than individuals with better health status. Although the household is set as the enrollment unit for the RMHC for the purpose of reducing adverse selection, nearly 1/3 of enrolled households are actually only partially enrolled. Furthermore, we found that adverse selection mainly occurs in partially enrolled households. The non-enrolled individuals in partially enrolled households have the best health status, while the enrolled individuals in partially enrolled households have the worst health status. Pre-RMHC, medical expenditure for enrolled individuals in partially enrolled households was 206.6 yuan per capita per year, which is 1.7 times as much as the pre-RMHC medical expenditure for non-enrolled individuals in partially enrolled households. The study also reveals that the pre-enrollment medical expenditure per capita per year of enrolled individuals was 9.6% higher than that of all residents, including both enrolled and non-enrolled individuals. In conclusion, although the subsidized RMHC scheme reached a very high enrollment rate and the household is set as the enrollment unit for the purpose of reducing adverse selection, adverse selection still exists, especially within partially enrolled households. Voluntary RMHC will not be financially sustainable if adverse selection is not fully taken into account.

  3. Joint Transmit Antenna Selection and Power Allocation for ISDF Relaying Mobile-to-Mobile Sensor Networks

    PubMed Central

    Xu, Lingwei; Zhang, Hao; Gulliver, T. Aaron

    2016-01-01

    The outage probability (OP) performance of multiple-relay incremental-selective decode-and-forward (ISDF) relaying mobile-to-mobile (M2M) sensor networks with transmit antenna selection (TAS) over N-Nakagami fading channels is investigated. Exact closed-form OP expressions for both optimal and suboptimal TAS schemes are derived. The power allocation problem is formulated to determine the optimal division of transmit power between the broadcast and relay phases. The OP performance under different conditions is evaluated via numerical simulation to verify the analysis. These results show that the optimal TAS scheme has better OP performance than the suboptimal scheme. Further, the power allocation parameter has a significant influence on the OP performance. PMID:26907282
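The transmit antenna selection idea can be illustrated with a small Monte Carlo sketch: Nakagami-m channel power gains are Gamma-distributed, TAS keeps the best antenna, and outage is the event that capacity falls below a target rate. The parameters are illustrative; the paper's N-Nakagami cascaded channels and ISDF relaying are not modelled here.

```python
# Monte Carlo sketch of outage probability with transmit antenna selection
# over Nakagami-m fading; a simplification of the paper's setup.
import numpy as np

rng = np.random.default_rng(1)
m, omega = 2.0, 1.0        # Nakagami shape parameter and mean power
snr, rate = 5.0, 1.5       # average SNR (linear) and target rate (bit/s/Hz)
n_trials, n_antennas = 200000, 4

# Nakagami-m power gains are Gamma(m, omega/m) distributed.
gains = rng.gamma(shape=m, scale=omega / m, size=(n_trials, n_antennas))

# Outage without selection (first antenna) vs. with TAS (best antenna).
op_single = np.mean(np.log2(1 + snr * gains[:, 0]) < rate)
op_tas = np.mean(np.log2(1 + snr * gains.max(axis=1)) < rate)
print(round(op_single, 3), round(op_tas, 3))
```

Because the selected gain dominates any single antenna's gain in every trial, the TAS outage probability can never exceed the single-antenna one, matching the qualitative conclusion that the optimal TAS scheme outperforms no selection.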

  4. Force sharing in high-power parallel servo-actuators

    NASA Technical Reports Server (NTRS)

    Neal, T. P.

    1974-01-01

    The various existing force sharing schemes were examined by conducting a literature survey. A list of potentially applicable concepts was compiled from this survey, and a brief analysis was then made of each concept, which resulted in two competing schemes being selected for in-depth evaluation. A functional design of the equalization logic for the two schemes was undertaken, and a specific space shuttle application was chosen for experimental evaluation. The application was scaled down so that existing hardware could be utilized. Next, an analog computer study was conducted to evaluate the more important characteristics of the two competing force sharing schemes. On the basis of the computer study, a final configuration was selected. A load simulator was then designed to evaluate this configuration on actual hardware.

  5. Information-theoretic CAD system in mammography: Entropy-based indexing for computational efficiency and robust performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee

    2007-08-15

    We have previously presented a knowledge-based computer-assisted detection (KB-CADe) system for the detection of mammographic masses. The system is designed to compare a query mammographic region with mammographic templates of known ground truth. The templates are stored in an adaptive knowledge database. Image similarity is assessed with information theoretic measures (e.g., mutual information) derived directly from the image histograms. A previous study suggested that the diagnostic performance of the system steadily improves as the knowledge database is initially enriched with more templates. However, as the database increases in size, an exhaustive comparison of the query case with each stored template becomes computationally burdensome. Furthermore, blind storing of new templates may result in redundancies that do not necessarily improve diagnostic performance. To address these concerns we investigated an entropy-based indexing scheme for improving the speed of analysis and for satisfying database storage restrictions without compromising the overall diagnostic performance of our KB-CADe system. The indexing scheme was evaluated on two different datasets as (i) a search mechanism to sort through the knowledge database, and (ii) a selection mechanism to build a smaller, concise knowledge database that is easier to maintain but still effective. There were two important findings in the study. First, entropy-based indexing is an effective strategy to identify fast a subset of templates that are most relevant to a given query. Only this subset could be analyzed in more detail using mutual information for optimized decision making regarding the query. Second, a selective entropy-based deposit strategy may be preferable where only high entropy cases are maintained in the knowledge database. 
Overall, the proposed entropy-based indexing scheme was shown to reduce the computational cost of our KB-CADe system by 55% to 80% while maintaining the system's diagnostic performance.
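The two information-theoretic ingredients described above, histogram entropy (the indexing key) and mutual information between gray-level histograms (the similarity measure), can be sketched directly from image histograms. The synthetic arrays below stand in for mammographic ROIs.

```python
# Histogram entropy and mutual information computed from image histograms,
# the two quantities the KB-CADe indexing and similarity steps rely on.
# Synthetic random arrays stand in for real ROIs.
import numpy as np

def histogram_entropy(img, bins=64):
    """Shannon entropy of an image's gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=64):
    """Mutual information estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(0)
query = rng.random((64, 64))
unrelated = rng.random((64, 64))
print(round(histogram_entropy(query), 2))
# An image is maximally informative about itself:
print(mutual_information(query, query) > mutual_information(query, unrelated))
```

Entropy is cheap to compute and one number per template, which is why it works as a fast index; the costlier mutual information comparison is then reserved for the short-listed templates.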

  6. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    The need for improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be considered, including the congestion margin determined by weighted contingency probabilities, a component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett's method, is proposed. We extended Everett's method to handle the expected congestion margin and the reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated into the optimization method. It assists in selecting which feeder load to shed, along with the location, value and priority of that load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads and a network.
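The priority-driven part of such a scheme can be illustrated with a toy greedy rule: shed the lowest-priority feeder loads first until demand fits within available capacity. The load names and numbers are hypothetical, and the paper's Everett-style optimization with congestion-margin and reliability constraints is not reproduced here.

```python
# Toy priority-based load shedding: shed least-critical loads until the
# remaining demand fits the available capacity. Loads are hypothetical.

def shed_loads(loads, capacity):
    """loads: list of (name, kw, priority), higher priority = more critical.
    Returns the names of loads to shed, least critical first."""
    total = sum(kw for _, kw, _ in loads)
    shed = []
    for name, kw, _ in sorted(loads, key=lambda load: load[2]):
        if total <= capacity:
            break
        shed.append(name)
        total -= kw
    return shed

loads = [("life_support", 40, 10), ("experiments", 25, 5),
         ("lighting", 15, 3), ("galley", 10, 2)]
print(shed_loads(loads, capacity=70))  # ['galley', 'lighting']
```

A full formulation would trade off load value against the congestion margin and reliability constraints rather than using priority alone, which is where the extended Everett optimization comes in.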

  7. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map to compress. Every block selects its coding mode from the two new modes and the previous intramodes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images and maintains performance comparable to H.264 for natural images.
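The base colors and index map (BCIM) idea can be sketched with ordinary color quantization: represent a block by a small palette plus one index per pixel. Here k-means stands in for whatever quantizer the codec actually uses, and the block is a synthetic two-color patch, not real compound-image data.

```python
# Sketch of the BCIM representation: a block becomes a few "base colors"
# (the palette) plus a small index per pixel. k-means is an illustrative
# stand-in for the codec's adaptive color quantization.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic 16x16 RGB block with two dominant colors plus mild noise,
# mimicking text/graphics content.
block = np.where(rng.random((16, 16, 1)) < 0.5, [220, 30, 30], [30, 30, 220])
block = (block + rng.normal(0, 5, block.shape)).astype(np.float64)

pixels = block.reshape(-1, 3)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
base_colors = kmeans.cluster_centers_       # the palette to transmit
index_map = kmeans.labels_.reshape(16, 16)  # one 2-bit index per pixel

reconstructed = base_colors[index_map]
print(index_map.shape, reconstructed.shape)
```

Because text and graphics blocks contain few distinct colors, the palette plus index map codes them far more compactly than a transform would, which is the intuition behind adding BCIM as an intramode.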

  8. Applying deep learning technology to automatically identify metaphase chromosomes using scanning microscopic images: an initial investigation

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Lu, Xianglan; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Li, Shibo; Liu, Hong; Zheng, Bin

    2016-03-01

    Automated high-throughput scanning microscopy is a fast-developing screening technology used in cytogenetic laboratories for the diagnosis of leukemia and other genetic diseases. However, one of the major challenges of using this new technology is how to efficiently detect the analyzable metaphase chromosomes during the scanning process. The purpose of this investigation is to develop a computer-aided detection (CAD) scheme based on deep learning technology that can identify metaphase chromosomes with high accuracy. The CAD scheme includes an eight-layer neural network. The first six layers comprise an automatic feature extraction module with an architecture of three convolution-max-pooling layer pairs, containing 30, 20, and 20 feature maps, respectively. The seventh and eighth layers comprise a multilayer perceptron (MLP) based classifier, which is used to identify the analyzable metaphase chromosomes. The performance of the new CAD scheme was assessed by the receiver operating characteristic (ROC) method. A total of 150 regions of interest (ROIs), each containing either an interphase cell or metaphase chromosomes, were selected to test the performance of the new CAD scheme. The results indicate that the new scheme is able to achieve an area under the ROC curve (AUC) of 0.886 ± 0.043. This investigation demonstrates that applying a deep learning technique may significantly improve the accuracy of metaphase chromosome detection using scanning microscopic imaging technology in the future.
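The three convolution/max-pooling pairs determine how an ROI shrinks before reaching the MLP classifier. A small shape-tracing sketch, with the input size, kernel sizes, and pooling sizes assumed (the abstract does not state them):

```python
def conv_pool_shapes(h, w, pairs):
    """Trace feature-map sizes through convolution/max-pooling layer pairs,
    mirroring the abstract's three-pair feature-extraction module."""
    shapes = []
    for kernel, pool in pairs:
        h, w = h - kernel + 1, w - kernel + 1   # 'valid' convolution
        h, w = h // pool, w // pool             # non-overlapping max pooling
        shapes.append((h, w))
    return shapes

# Assumed 64x64 ROI, 5x5 kernels, 2x2 pooling for each of the three pairs.
print(conv_pool_shapes(64, 64, [(5, 2), (5, 2), (5, 2)]))
# [(30, 30), (13, 13), (4, 4)]
```

With 20 feature maps in the last pair, the flattened input to the MLP under these assumptions would be 20 × 4 × 4 = 320 values.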

  9. An Effective Delay Reduction Approach through a Portion of Nodes with a Larger Duty Cycle for Industrial WSNs

    PubMed Central

    Wu, Minrui; Wu, Yanhui; Liu, Chuyao; Cai, Zhiping; Ma, Ming

    2018-01-01

    For Industrial Wireless Sensor Networks (IWSNs), sending the data monitored by sensor nodes to the sink (or control center, CC) in a timely manner is a challenging issue, because wireless sensor networks based on a duty cycle are widely used in the industrial field to save energy, which can introduce substantial delay into data transmission. We observe that if the duty cycle of a small number of nodes in the network is set to 1, the sleep delay caused by the duty cycle can be effectively reduced. Thus, in this paper, a novel Portion of Nodes with Larger Duty Cycle (PNLDC) scheme is proposed to reduce delay and optimize energy efficiency for IWSNs. In the PNLDC scheme, a portion of nodes are selected to set their duty cycle to 1, and the proportion of nodes with a duty cycle of 1 is determined according to the energy abundance of the area in which the node is located: the more residual energy in the region, the greater the proportion of selected nodes. Because a certain proportion of nodes in the network have a duty cycle of 1, the PNLDC scheme can effectively reduce delay in IWSNs. The performance analysis and experimental results show that, compared with the fixed duty cycle method, the proposed scheme significantly reduces the delay for forwarding data by 8.9% to 26.4% and the delay for detection by 2.1% to 24.6% without reducing the network lifetime. Meanwhile, compared with the dynamic duty cycle strategy, the proposed scheme has certain advantages in terms of energy utilization and delay reduction. PMID:29757236

  10. An Effective Delay Reduction Approach through a Portion of Nodes with a Larger Duty Cycle for Industrial WSNs.

    PubMed

    Wu, Minrui; Wu, Yanhui; Liu, Chuyao; Cai, Zhiping; Xiong, Neal N; Liu, Anfeng; Ma, Ming

    2018-05-12

    For Industrial Wireless Sensor Networks (IWSNs), sending the data monitored by sensor nodes to the sink (or control center, CC) in a timely manner is a challenging issue, because wireless sensor networks based on a duty cycle are widely used in the industrial field to save energy, which can introduce substantial delay into data transmission. We observe that if the duty cycle of a small number of nodes in the network is set to 1, the sleep delay caused by the duty cycle can be effectively reduced. Thus, in this paper, a novel Portion of Nodes with Larger Duty Cycle (PNLDC) scheme is proposed to reduce delay and optimize energy efficiency for IWSNs. In the PNLDC scheme, a portion of nodes are selected to set their duty cycle to 1, and the proportion of nodes with a duty cycle of 1 is determined according to the energy abundance of the area in which the node is located: the more residual energy in the region, the greater the proportion of selected nodes. Because a certain proportion of nodes in the network have a duty cycle of 1, the PNLDC scheme can effectively reduce delay in IWSNs. The performance analysis and experimental results show that, compared with the fixed duty cycle method, the proposed scheme significantly reduces the delay for forwarding data by 8.9% to 26.4% and the delay for detection by 2.1% to 24.6% without reducing the network lifetime. Meanwhile, compared with the dynamic duty cycle strategy, the proposed scheme has certain advantages in terms of energy utilization and delay reduction.
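The energy-proportional selection of duty-cycle-1 nodes can be sketched as follows. The region data and the base fraction are hypothetical, and the actual PNLDC proportioning rule may differ:

```python
def pick_full_duty_nodes(regions, base_fraction=0.1):
    """PNLDC-style sketch: in each region, set a fraction of nodes to a
    duty cycle of 1, scaling that fraction with the region's residual
    energy relative to the most energy-rich region."""
    max_energy = max(r["residual_energy"] for r in regions)
    plan = {}
    for r in regions:
        frac = base_fraction * r["residual_energy"] / max_energy
        plan[r["name"]] = int(round(frac * r["nodes"]))
    return plan

# Hypothetical regions: node count and residual energy (J). Nodes near the
# sink relay more traffic and so hold less residual energy.
regions = [{"name": "near_sink", "nodes": 100, "residual_energy": 20.0},
           {"name": "mid",       "nodes": 100, "residual_energy": 60.0},
           {"name": "edge",      "nodes": 100, "residual_energy": 100.0}]
print(pick_full_duty_nodes(regions))
# {'near_sink': 2, 'mid': 6, 'edge': 10}
```

Scaling with residual energy concentrates the always-on nodes where the extra listening cost does not shorten the network lifetime, matching the abstract's claim.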

  11. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport

    PubMed Central

    2010-01-01

    Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass transport in the vasculature. A principal consideration in computational modelling of blood-side mass transport is the selection of the convection-diffusion discretisation scheme. Because numerous discretisation schemes are available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m^2/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on the resultant species concentration field, as determined by CFD. Either the Second-Order Upwind or the QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions, and care should be taken not to sacrifice accuracy in the resultant species concentration for computationally inexpensive discretisation schemes. PMID:20642816
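The quoted Peclet number follows directly from Pe = U·L/D. The velocity and length scale below are assumptions chosen to reproduce the abstract's value of 2,560,000; only the diffusivity is taken from the abstract:

```python
def peclet(velocity, length, diffusivity):
    """Peclet number Pe = U * L / D: the ratio of convective to diffusive
    mass transport. Pe >> 1 marks strongly convection-dominated flow."""
    return velocity * length / diffusivity

D = 3.125e-10   # species diffusivity in water, m^2/s (from the abstract)
# Assumed characteristic velocity of 0.08 m/s and length scale of 1 cm.
pe = peclet(velocity=0.08, length=0.01, diffusivity=D)
print(pe)       # of order 2.56e6: strongly convection-dominated
```

At such Pe, numerical (false) diffusion in low-order schemes can swamp the tiny physical diffusivity, which is consistent with the large errors reported for the First-Order Upwind and Power Law schemes.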

  12. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  13. Vibrational quasi-degenerate perturbation theory with optimized coordinates: Applications to ethylene and trans-1,3-butadiene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yagi, Kiyoshi, E-mail: kiyoshi.yagi@riken.jp; Otaki, Hiroki

    A perturbative extension to optimized coordinate vibrational self-consistent field (oc-VSCF) is proposed based on the quasi-degenerate perturbation theory (QDPT). A scheme to construct the degenerate space (P space) is developed, which incorporates degenerate configurations and alleviates the divergence of the perturbative expansion due to localized coordinates in oc-VSCF (e.g., local O-H stretching modes of water). An efficient configuration selection scheme is also implemented, which screens out the Hamiltonian matrix element between a P space configuration (p) and a complementary Q space configuration (q) based on a difference in their quantum numbers (λ_pq = Σ_s |p_s − q_s|). It is demonstrated that the second-order vibrational QDPT based on optimized coordinates (oc-VQDPT2) smoothly converges with respect to the order of the mode coupling, and outperforms the conventional one based on normal coordinates. Furthermore, an improved, fast algorithm is developed for optimizing the coordinates. First, the minimization of the VSCF energy is conducted in a restricted parameter space, in which only a portion of the pairs of coordinates is selectively transformed. A rational index is devised for this purpose, which identifies the important coordinate pairs to mix, distinguishing them from others that may remain unchanged, based on the magnitude of harmonic coupling induced by the transformation. Second, a cubic force field (CFF) is employed in place of a quartic force field, which bypasses intensive procedures that arise due to the presence of the fourth-order force constants. It is found that oc-VSCF based on CFF together with the pair selection scheme yields coordinates similar in character to the conventional ones, such that the final vibrational energy is affected very little while gaining an order of magnitude acceleration. The proposed method is applied to ethylene and trans-1,3-butadiene. 
An accurate, multi-resolution potential, which combines the MP2 and coupled-cluster with singles, doubles, and perturbative triples levels of electronic structure theory, is generated and employed in the oc-VQDPT2 calculation to obtain the fundamental tones as well as selected overtones/combination tones coupled to the fundamentals through the Fermi resonance. The calculated frequencies of ethylene and trans-1,3-butadiene are found to be in excellent agreement with the experimental values, with mean absolute errors of 8 and 9 cm^-1, respectively.
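The λ_pq screening rule is simple to state in code: a Q-space configuration is kept only if its total quantum-number distance from the P-space configuration is small. The configurations and threshold below are hypothetical:

```python
def screen_q_space(p_config, q_configs, lam_max):
    """Configuration-selection sketch: keep only Q-space configurations q
    whose quantum-number distance from the P-space configuration p,
    lambda_pq = sum_s |p_s - q_s|, is at most lam_max."""
    def lam(p, q):
        return sum(abs(ps - qs) for ps, qs in zip(p, q))
    return [q for q in q_configs if lam(p_config, q) <= lam_max]

# Hypothetical 3-mode vibrational configurations (quantum numbers per mode).
p = (1, 0, 0)
q_space = [(0, 0, 0), (2, 0, 0), (0, 1, 1), (3, 2, 1)]
print(screen_q_space(p, q_space, lam_max=2))
# [(0, 0, 0), (2, 0, 0)]
```

Configurations far away in quantum-number space contribute little through low-order coupling terms, so discarding them cheaply prunes the Hamiltonian matrix elements to evaluate.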

  14. Regional management of farmland feeding geese using an ecological prioritization tool.

    PubMed

    Madsen, Jesper; Bjerrum, Morten; Tombre, Ingunn M

    2014-10-01

    Wild geese foraging on farmland cause increasing conflicts with agricultural interests, calling for a strategic approach to mitigation. In central Norway, conflicts between farmers and spring-staging pink-footed geese feeding on pastures have escalated. To alleviate the conflict, a scheme was introduced by which farmers are subsidized to allow geese to forage undisturbed. To guide the allocation of subsidies, an ecologically based ranking of fields at a regional level was recommended and applied. Here we evaluate the scheme. On average, 40 % of subsidized fields were in the top 5 % of the ranking, and 80 % were within the top 20 %. Goose grazing pressure on subsidized pastures was 13 times higher than on a stratified random selection of non-subsidized pastures, capturing 67 % of the pasture-feeding geese even though subsidized fields comprised only 13 % of the grassland area. Close dialogue between scientists and managers is regarded as a key to the success of the scheme.

  15. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data.

    PubMed

    Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei

    2017-04-01

    In this study we developed a graph-based semi-supervised learning (SSL) scheme using a deep convolutional neural network (CNN) for breast cancer diagnosis. A CNN usually needs a large amount of labeled data for training and fine-tuning its parameters, whereas our proposed scheme requires only a small portion of labeled data in the training set. Four modules were included in the diagnosis system: data weighing, feature selection, dividing co-training data labeling, and CNN. A total of 3158 regions of interest (ROIs), each containing a mass, were extracted from 1874 pairs of mammogram images and used for this study. Among them, 100 ROIs were treated as labeled data while the rest were treated as unlabeled. The area under the curve (AUC) observed in our study was 0.8818, and the accuracy of the CNN was 0.8243 using the mixed labeled and unlabeled data. Copyright © 2016. Published by Elsevier Ltd.

  16. A homonuclear spin-pair filter for solid-state NMR based on adiabatic-passage techniques

    NASA Astrophysics Data System (ADS)

    Verel, René; Baldus, Marc; Ernst, Matthias; Meier, Beat H.

    1998-05-01

    A filtering scheme for the selection of spin pairs (and larger spin clusters) under fast magic-angle spinning is proposed. The scheme exploits the avoided level crossing in spin pairs during an adiabatic amplitude sweep through the so-called HORROR recoupling condition. The advantages over presently used double-quantum filters are twofold. (i) The maximum theoretical filter efficiency is, due to the adiabatic variation, 100% instead of 73% as for transient methods. (ii) Since the filter does not rely on the phase-cycling properties of the double-quantum coherence, there is no need to obtain the full double-quantum intensity for all spins in the sample at one single point in time. The only important requirement is that all coupled spins pass through a two-spin state during the amplitude sweep. This makes the pulse scheme robust with respect to rf-amplitude missetting, rf-field inhomogeneity and chemical-shift offset.

  17. On the placement of active members in adaptive truss structures for vibration control

    NASA Technical Reports Server (NTRS)

    Lu, L.-Y.; Utku, S.; Wada, B. K.

    1992-01-01

    The problem of optimal placement of active members used for vibration control in adaptive truss structures is investigated. The control scheme is based on the method of eigenvalue assignment as a means of shaping the transient response of the controlled adaptive structure, and the minimization of the required control action is considered as the optimization criterion. To this end, a performance index which measures the control strokes of the active members is formulated in an efficient way. In order to reduce the computational burden, particularly for the case where the locations of active members have to be selected from a large set of available sites, several heuristic searching schemes are proposed for obtaining near-optimal locations. The proposed schemes reduce the computational complexity of placing multiple active members to the order of that of placing a single active member.
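A minimal sketch of the heuristic idea of avoiding exhaustive subset search. In the actual method the performance index of the remaining sites would be re-evaluated after each placement (the sites are coupled through the structure), whereas this toy version uses fixed, hypothetical per-site costs:

```python
def greedy_placement(costs, n_members):
    """Heuristic placement sketch: rather than evaluating every subset of
    candidate sites, pick one site at a time, each time choosing the site
    with the lowest control-stroke cost among those remaining."""
    remaining = dict(costs)
    chosen = []
    for _ in range(n_members):
        site = min(remaining, key=remaining.get)  # cheapest remaining site
        chosen.append(site)
        del remaining[site]
    return sorted(chosen)

# Hypothetical per-site control-stroke costs for 6 candidate truss sites.
costs = {"s1": 4.2, "s2": 1.1, "s3": 3.7, "s4": 0.9, "s5": 2.5, "s6": 5.0}
print(greedy_placement(costs, n_members=3))  # ['s2', 's4', 's5']
```

The point of the heuristic is complexity: k sequential single-site searches over n sites cost O(kn) index evaluations, versus O(n choose k) for the exhaustive search.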

  18. Security-Oriented and Load-Balancing Wireless Data Routing Game in the Integration of Advanced Metering Infrastructure Network in Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Fulin; Cao, Yang; Zhang, Jun Jason

    Because ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the security routing scheme. Then, we model the interactive security-oriented routing strategy among meter data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, the problem results in a stable probabilistic routing scheme with the proposed distributed learning algorithm. One contribution is a study of how different types of applications affect the routing selection strategy and the strategy tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.

  19. Well balancing of the SWE schemes for moving-water steady flows

    NASA Astrophysics Data System (ADS)

    Caleffi, Valerio; Valiani, Alessandro

    2017-08-01

    In this work, the exact reproduction of a moving-water steady flow via the numerical solution of the one-dimensional shallow water equations is studied. A new scheme based on a modified version of the HLLEM approximate Riemann solver (Dumbser and Balsara (2016) [18]) that exactly preserves the total head and the discharge in the simulation of smooth steady flows and that correctly dissipates mechanical energy in the presence of hydraulic jumps is presented. This model is compared with a selected set of schemes from the literature, including models that exactly preserve quiescent flows and models that exactly preserve moving-water steady flows. The comparison highlights the strengths and weaknesses of the different approaches. In particular, the results show that the increase in accuracy in the steady state reproduction is counterbalanced by a reduced robustness and numerical efficiency of the models. Some solutions to reduce these drawbacks, at the cost of increased algorithm complexity, are presented.

  20. A Learning Based Fiducial-driven Registration Scheme for Evaluating Laser Ablation Changes in Neurological Disorders.

    PubMed

    Wan, Tao; Bloch, B Nicolas; Danish, Shabbar; Madabhushi, Anant

    2014-11-20

    In this work, we present a novel learning-based fiducial-driven registration (LeFiR) scheme which utilizes a point matching technique to identify the optimal configuration of landmarks to better recover deformation between a target and a moving image. Moreover, we employ the LeFiR scheme to model the localized nature of deformation introduced by a new treatment modality, laser-induced interstitial thermal therapy (LITT), for treating neurological disorders. Magnetic resonance (MR) guided LITT has recently emerged as a minimally invasive alternative to craniotomy for local treatment of brain diseases (such as glioblastoma multiforme (GBM) and epilepsy). However, LITT is currently practised only as an investigational procedure worldwide, owing to a lack of data on longer-term patient outcomes following LITT. There is thus a need to quantitatively evaluate treatment-related changes between post- and pre-LITT images in terms of MR imaging markers. In order to validate LeFiR, we tested the scheme on a synthetic brain dataset (SBD) and in two real clinical scenarios for treating GBM and epilepsy with LITT. Four experiments under different deformation profiles simulating localized ablation effects of LITT on MRI were conducted on 286 pairs of SBD images. The training landmark configurations were obtained through 2000 iterations of registration, where the points with consistently best registration performance were selected. The estimated landmarks greatly improved the quality metrics compared to a uniform grid (UniG) placement scheme, a speeded-up robust features (SURF) based method, a scale-invariant feature transform (SIFT) based method, and a generic free-form deformation (FFD) approach. The LeFiR method achieved an average 90% improvement in recovering the local deformation, compared to 82% for the uniform grid placement, 62% for the SURF-based approach, and 16% for the generic FFD approach. On the real GBM and epilepsy data, the quantitative results showed that LeFiR outperformed UniG by an average improvement of 28%.

  1. SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease.

    PubMed

    Ozcift, Akin

    2012-08-01

    Parkinson disease (PD) is an age-related deterioration of certain nerve systems that affects movement, balance, and muscle control. PD is a common disease, affecting 1% of people older than 60 years. A new classification scheme based on support vector machine (SVM) selected features to train rotation forest (RF) ensemble classifiers is presented for improving the diagnosis of PD. The dataset contains records of voice measurements from 31 people, 23 with PD, and each record in the dataset is defined by 22 features. The diagnosis model first makes use of a linear SVM to select the ten most relevant of the 22 features. In the second step of the classification model, six different classifiers are trained with the subset of features. Subsequently, in the third step, the accuracies of the classifiers are improved by the RF ensemble classification strategy. The results of the experiments are evaluated using three metrics: classification accuracy (ACC), Kappa Error (KE), and the area under the receiver operating characteristic curve (AUC). Performance measures of two base classifiers, KStar and IBk, demonstrated an apparent increase in PD diagnosis accuracy compared to similar studies in the literature. Overall, application of the RF ensemble classification scheme significantly improved PD diagnosis for 5 of the 6 classifiers. We obtained about 97% accuracy with an RF ensemble of the IBk (a k-nearest-neighbor variant) algorithm, which is quite high performance for Parkinson disease diagnosis.
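The paper ranks features by linear-SVM weights before training the ensemble. The dependency-free sketch below substitutes a correlation score as a stand-in ranking criterion (not the paper's method) on invented toy data:

```python
def rank_features(X, y, k):
    """Filter-style stand-in for SVM-based feature selection: score each
    feature by the absolute Pearson correlation of its column with the
    class label and keep the indices of the top k features."""
    n = len(y)
    my = sum(y) / n
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        scores.append(abs(cov / (sx * sy)) if sx and sy else 0.0)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 tracks the label, feature 1 is noise, feature 2 constant.
X = [[1.0, 0.3, 5.0], [0.9, 0.9, 5.0], [0.1, 0.2, 5.0], [0.0, 0.8, 5.0]]
y = [1, 1, 0, 0]
print(rank_features(X, y, k=1))  # [0]
```

In the paper's pipeline the surviving 10 of 22 voice features would then be fed to the six base classifiers wrapped in the rotation forest ensemble.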

  2. A picture's worth a thousand words: a food-selection observational method.

    PubMed

    Carins, Julia E; Rundle-Thiele, Sharyn R; Parkinson, Joy E

    2016-05-04

    Issue addressed: Methods are needed to accurately measure and describe behaviour so that social marketers and other behaviour change researchers can gain consumer insights before designing behaviour change strategies and so, in time, they can measure the impact of strategies or interventions when implemented. This paper describes a photographic method developed to meet these needs. Methods: Direct observation and photographic methods were developed and used to capture food-selection behaviour and examine those selections according to their healthfulness. Four meals (two lunches and two dinners) were observed at a workplace buffet-style cafeteria over a 1-week period. The healthfulness of individual meals was assessed using a classification scheme developed for the present study and based on the Australian Dietary Guidelines. Results: Approximately 27% of meals (n = 168) were photographed. Agreement was high between raters classifying dishes using the scheme, as well as between researchers when coding photographs. The subset of photographs was representative of patterns observed in the entire dining room. Diners chose main dishes in line with the proportions presented, but in opposition to the proportions presented for side dishes. Conclusions: The present study developed a rigorous observational method to investigate food choice behaviour. The comprehensive food classification scheme produced consistent classifications of foods. The photographic data collection method was found to be robust and accurate. Combining the two observation methods allows researchers and/or practitioners to accurately measure and interpret food selections. Consumer insights gained suggest that, in this setting, increasing the availability of green (healthful) offerings for main dishes would assist in improving healthfulness, whereas other strategies (e.g. promotion) may be needed for side dishes. 
So what?: Visual observation methods that accurately measure and interpret food-selection behaviour provide both insight for those developing healthy eating interventions and a means to evaluate the effect of implemented interventions on food selection.

  3. Detection of Listeria monocytogenes from selective enrichment broth using MALDI-TOF Mass Spectrometry.

    PubMed

    Jadhav, Snehal; Sevior, Danielle; Bhave, Mrinal; Palombo, Enzo A

    2014-01-31

    Conventional methods used for primary detection of Listeria monocytogenes from foods, and subsequent confirmation of presumptive positive samples, involve prolonged incubation and biochemical testing, generally requiring four to five days to obtain a result. In the current study, a simple and rapid proteomics-based MALDI-TOF MS approach was developed to detect L. monocytogenes directly from selective enrichment broths. Milk samples spiked with single-species and multiple-species cultures were incubated in a selective enrichment broth for 24 h, followed by an additional 6 h secondary enrichment. As few as 1 colony-forming unit (cfu) of L. monocytogenes per mL of initial selective broth culture could be detected within 30 h. On applying the same approach to solid foods previously implicated in listeriosis, namely chicken pâté, cantaloupe and Camembert cheese, detection was achieved within the same time interval at inoculation levels of 10 cfu/mL. Unlike the routine application of MALDI-TOF MS for identification of bacteria from solid media, this study proposes a cost-effective and time-saving detection scheme for direct identification of L. monocytogenes from broth cultures. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Globally, foodborne diseases are major causes of illness and fatalities in humans; hence, there is a continual need for reliable and rapid means of pathogen detection from food samples. Recent applications of MALDI-TOF MS in diagnostic microbiology have focused on detection of microbes from clinical specimens, but the current study has emphasized its use as a tool for detecting the major foodborne pathogen, Listeria monocytogenes, directly from selective enrichment broths. This proof-of-concept study proposes a detection scheme that is more rapid and simple than conventional methods of Listeria detection. Very low levels of the pathogen could be identified from different food samples after enrichment in selective enrichment broths. Use of this scheme will facilitate rapid and cost-effective testing for this important foodborne pathogen. © 2013.

  4. Computerized scheme for detection of diffuse lung diseases on CR chest images

    NASA Astrophysics Data System (ADS)

    Pereira, Roberto R., Jr.; Shiraishi, Junji; Li, Feng; Li, Qiang; Doi, Kunio

    2008-03-01

    We have developed a new computer-aided diagnostic (CAD) scheme for detection of diffuse lung disease in computed radiographic (CR) chest images. One hundred ninety-four chest images (56 normal and 138 abnormal with diffuse lung diseases) were used. The 138 abnormal cases were classified into three levels of severity (34 mild, 60 moderate, and 44 severe) by an experienced chest radiologist using five different patterns, i.e., reticular, reticulonodular, nodular, air-space opacity, and emphysema. In our computerized scheme, the first moment of the power spectrum, the root-mean-square variation, and the average pixel value were determined for each region of interest (ROI), which was selected automatically in the lung fields. The average pixel value and its dependence on the location of the ROI were employed for identifying abnormal patterns due to air-space opacity or emphysema. A rule-based method was used for determining the level of abnormality of each ROI (0: normal, 1: mild, 2: moderate, 3: severe). The distinction between normal lungs and abnormal lungs with diffuse lung disease was determined from the fractional number of abnormal ROIs, taking into account the severity of abnormalities. Preliminary results indicated that the area under the ROC curve was 0.889 for the 44 severe cases, 0.825 for the 104 severe and moderate cases, and 0.794 for all cases. We have identified a number of problems causing false positives on normal cases and false negatives on abnormal cases, and have discussed potential approaches for improving our CAD scheme. In conclusion, the CAD scheme for detection of diffuse lung diseases based on texture features extracted from CR chest images has the potential to assist radiologists in their interpretation of diffuse lung diseases.
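The ROI texture measures named in the abstract (average pixel value, root-mean-square variation, first moment of the power spectrum) can be sketched as follows. The exact definitions used in the paper are not given, so these formulas are assumptions:

```python
import numpy as np

def roi_texture_features(roi):
    """Sketch of ROI texture measures: average pixel value, RMS variation,
    and a first moment of the power spectrum (spatial-frequency-weighted
    average, so finer textures give larger values)."""
    roi = np.asarray(roi, dtype=float)
    avg = roi.mean()
    rms = np.sqrt(((roi - avg) ** 2).mean())
    spectrum = np.abs(np.fft.fft2(roi - avg)) ** 2   # zero-mean power spectrum
    fy = np.fft.fftfreq(roi.shape[0])[:, None]
    fx = np.fft.fftfreq(roi.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)              # radial spatial frequency
    total = spectrum.sum()
    first_moment = (radius * spectrum).sum() / total if total else 0.0
    return avg, rms, first_moment

# Synthetic 32x32 noise ROI stands in for a lung-field patch.
rng = np.random.default_rng(0)
roi = rng.uniform(0.0, 1.0, size=(32, 32))
avg, rms, fm = roi_texture_features(roi)
print(round(avg, 3), round(rms, 3), round(fm, 3))
```

A rule-based classifier of the kind the abstract describes would then threshold these three numbers per ROI to assign the 0..3 abnormality level.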

  5. Optical scheme for simulating post-quantum nonlocality distillation.

    PubMed

    Chu, Wen-Jing; Yang, Ming; Pan, Guo-Zhu; Yang, Qing; Cao, Zhuo-Liang

    2016-11-28

    An optical scheme for simulating nonlocality distillation in the post-quantum regime is proposed. The nonlocal boxes are simulated by measurements on appropriately pre- and post-selected polarization-entangled photon pairs, i.e., post-quantum nonlocality is simulated by exploiting the fair-sampling loophole in a Bell test. Mod-2 addition on the outputs of two nonlocal boxes, combined with pre- and post-selection operations, constitutes the key operation of simulating nonlocality distillation. This scheme provides a possible tool for the experimental study of nonlocality in the post-quantum regime and of the exact physical principle that precisely distinguishes physically realizable correlations from nonphysical ones.
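The nonlocal boxes in question are Popescu-Rohrlich (PR) boxes, whose outputs satisfy a ⊕ b = x ∧ y. A minimal abstract simulation of such a box and of its CHSH score; the optical pre-/post-selection machinery of the scheme is not modeled here:

```python
import itertools

def pr_box(x, y, shared_bit):
    """Ideal Popescu-Rohrlich box: outputs a, b with a XOR b = x AND y.
    The shared local bit keeps each party's marginal outputs uniform."""
    return shared_bit, shared_bit ^ (x & y)

def chsh_win_probability(box):
    """Average probability that a XOR b = x AND y over uniform inputs and
    shared bits: classical limit 0.75, quantum limit ~0.854, PR box 1.0."""
    wins = total = 0
    for x, y in itertools.product([0, 1], repeat=2):
        for bit in (0, 1):
            a, b = box(x, y, bit)
            wins += (a ^ b) == (x & y)
            total += 1
    return wins / total

print(chsh_win_probability(pr_box))  # 1.0, beyond the quantum bound
```

Distillation protocols aim to push noisy versions of such boxes (win probability below 1) closer to this ideal; the mod-2 combining step in the abstract operates on two such noisy boxes.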

  6. The Impact of Vocational Education on Poverty Reduction, Quality Assurance and Mobility on Regional Labour Markets--Selected EU-Funded Schemes

    ERIC Educational Resources Information Center

    Wallenborn, Manfred

    2009-01-01

    Vocational education can serve to promote social stability and sustainable economic and social development. The European Union (EU) strategically employs a range of vocational educational schemes to attain these overriding goals. Topical points of focus are selected in line with requirements in the individual partner countries or regions. However,…

  7. Individual Combatant’s Weapons Firing Algorithm

    DTIC Science & Technology

    2010-04-01

    Factors influencing the target selection prioritization scheme, aim point, mode of fire, and estimates on Phit/Pmiss were elicited for a single SME. Appendix A: SME fuzzy estimates on factors and estimates on Phit/Pmiss.

  8. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    The two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme has better throughput than the APC scheme, but suffers from a higher packet error rate. Because of the random and time-varying nature of the wireless channel, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired level of throughput; better throughput can be achieved if the appropriate transmission scheme is used for the current channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme to achieve better throughput. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes individually.
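The adaptive idea reduces to choosing a scheme per state of the three-state channel. The state-to-scheme mapping below is an illustrative assumption consistent with the trade-offs the abstract describes, not the paper's exact decision rule:

```python
def choose_scheme(channel_state):
    """Adaptive-selection sketch: map each state of a three-state channel
    model to the retransmission scheme best suited to it."""
    mapping = {
        "good": "SR-ARQ",  # low error rate: plain selective-repeat suffices
        "fair": "PC",      # moderate errors: packet combining, good throughput
        "bad":  "APC",     # high error rate: aggressive packet combining
    }
    return mapping[channel_state]

print([choose_scheme(s) for s in ("good", "fair", "bad")])
# ['SR-ARQ', 'PC', 'APC']
```

The receiver would track the channel state (e.g., from recent error statistics) and switch schemes as the state changes, which is what lets the hybrid beat any single scheme applied alone.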

  9. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan

    2015-04-01

    Identification of acetone and its two isomers, and control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules irradiated by identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, for which a traditional single mass spectrometer is unsuitable. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
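The closed-loop optimization can be sketched as a plain genetic algorithm over pulse-shape parameters; in the experiment the fitness is measured from the two mass spectrometers, so the analytic toy fitness, gene count, and GA parameters below are all illustrative assumptions:

```python
import random

random.seed(0)

def fitness(genes):
    # Stand-in for the measured contrast between the two molecules' spectra;
    # maximal when every phase parameter equals 0.5 (purely illustrative).
    return -sum((g - 0.5) ** 2 for g in genes)

def evolve(pop_size=20, n_genes=8, generations=50, mut_rate=0.1):
    """Elitist GA: keep the better half, refill by crossover + mutation."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # survivors
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):            # random-reset mutation
                if random.random() < mut_rate:
                    child[i] = random.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 4))
```

In the actual closed loop, evaluating `fitness` corresponds to shaping the pulse, firing it at both samples, and reading out the spectra.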

  10. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand-assigned packet radio schemes using satellite repeaters can achieve high capacity but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes are presented and analyzed that improve delay performance at low traffic while retaining high capacity; these schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals linked via a satellite repeater. Channel assignments follow a BCC queueing discipline, made by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand-assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. Scheme C also introduces framing and slotting of the time axis to implement a more efficient trailer-channel-selection algorithm than Scheme B.

  11. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
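The variance-reduction idea can be illustrated in miniature: for a Gaussian, a Girsanov-type change of measure reduces to shifting the sampling mean and reweighting each sample by the likelihood ratio. The rare-event probability and shift below are illustrative assumptions, not quantities from the study:

```python
import math
import random

# Estimate p = P(X > a) for X ~ N(0,1) by sampling from the shifted law
# N(mu, 1) and correcting with the likelihood ratio dP/dQ.
random.seed(1)
a, mu, n = 4.0, 4.0, 100_000

def likelihood_ratio(x, mu):
    # For N(0,1) vs N(mu,1): exp(-mu*x + mu^2/2)
    return math.exp(-mu * x + 0.5 * mu * mu)

est = 0.0
for _ in range(n):
    x = random.gauss(mu, 1.0)       # sample under the shifted measure
    if x > a:
        est += likelihood_ratio(x, mu)
est /= n

exact = 0.5 * math.erfc(a / math.sqrt(2))   # true tail probability
print(f"IS estimate {est:.3e} vs exact {exact:.3e}")
```

Crude Monte Carlo with the same n would see almost no samples beyond a = 4; the mean-shifted sampler places about half its samples there, which is the discrete analogue of choosing a distance-minimizing control.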

  12. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua; Bai, Wenjia

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors' proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01).
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. Conclusions: The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
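The ranking criterion can be sketched on a toy example: estimate H(target | propagated labels) from a joint histogram and prefer atlases with lower conditional entropy, i.e. whose labelling best explains the target intensities. The 1-D "images" below are invented for illustration:

```python
import math
from collections import Counter

def conditional_entropy(target, labels):
    """H(target | labels) in bits, from the empirical joint distribution."""
    assert len(target) == len(labels)
    n = len(target)
    joint = Counter(zip(labels, target))   # joint histogram of (label, intensity)
    marg = Counter(labels)                 # marginal histogram of labels
    h = 0.0
    for (lab, _), c in joint.items():
        p_joint = c / n
        p_cond = c / marg[lab]             # P(intensity | label)
        h -= p_joint * math.log2(p_cond)
    return h

# Toy 1-D "image": intensities 0/1, two candidate propagated atlas labelings.
target  = [0, 0, 0, 0, 1, 1, 1, 1]
atlas_a = [0, 0, 0, 0, 1, 1, 1, 1]   # aligns perfectly with the target
atlas_b = [0, 1, 0, 1, 0, 1, 0, 1]   # uninformative labelling

print(conditional_entropy(target, atlas_a))  # 0.0 (perfectly predictive)
print(conditional_entropy(target, atlas_b))  # 1.0 (no information)
```

Atlases would then be sorted by this score and only the lowest-entropy subset passed to label fusion; real images would use intensity bins rather than raw values.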

  13. A broadcast-based key agreement scheme using set reconciliation for wireless body area networks.

    PubMed

    Ali, Aftab; Khan, Farrukh Aslam

    2014-05-01

    Information and communication technologies have thrived over the last few years. Healthcare systems have also benefited from this progression. A wireless body area network (WBAN) consists of small, low-power sensors used to monitor human physiological values remotely, which enables physicians to monitor the health of patients remotely. Communication security in WBANs is essential because it involves human physiological data. Key agreement and authentication are the primary issues in the security of WBANs. To agree upon a common key, the nodes exchange information with each other using wireless communication. This information exchange must either be secure or be minimized to a level such that, if a leak occurs, it does not compromise the overall system. Most of the existing solutions to this problem exchange so much information for the sake of key agreement that obtaining this information is sufficient for an attacker to reproduce the key. Set reconciliation is a technique used to reconcile two similar sets held by two different hosts with minimal communication complexity. This paper presents a broadcast-based key agreement scheme using set reconciliation for secure communication in WBANs. The proposed scheme allows the neighboring nodes to agree upon a common key with the personal server (PS), generated from the electrocardiogram (EKG) feature set of the host body. Minimal information is exchanged in a broadcast manner, and even if every node is missing a different subset, by reconciling these feature sets the whole network will still agree upon a single common key. Because of the limited information exchange, an attacker who obtains the exchanged information will not be able to reproduce the key. The proposed scheme mitigates replay, selective forwarding, and denial of service attacks using a challenge-response authentication mechanism.
The simulation results show that the proposed scheme has a great deal of adoptability in terms of security, communication overhead, and running time complexity, as compared to the existing EKG-based key agreement scheme.
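The end goal can be sketched in a toy form (this is not the paper's protocol): once each node has learned the elements it was missing, every party hashes the same agreed feature set into one common key. The feature values and the trivial reconciliation step below are assumptions; a real scheme reconciles with minimal communication (e.g. via characteristic polynomials) rather than by shipping the reference set:

```python
import hashlib

def reconcile(reference: set, node: set) -> set:
    """Toy reconciliation: the node learns its missing elements."""
    missing = reference - node        # what the node must learn
    return node | missing

def derive_key(features: set) -> str:
    """Hash a canonical encoding of the agreed set into a key."""
    data = ",".join(sorted(map(str, features))).encode()
    return hashlib.sha256(data).hexdigest()

ps_features = {12, 47, 63, 88, 104, 131}   # personal server's EKG feature set
node1 = ps_features - {47}                 # each node misses a different subset
node2 = ps_features - {104, 131}

k_ps = derive_key(ps_features)
k1 = derive_key(reconcile(ps_features, node1))
k2 = derive_key(reconcile(ps_features, node2))
print(k_ps == k1 == k2)   # True: all parties agree on a single common key
```

The security of the actual scheme rests on the reconciliation messages revealing too little for an attacker to reconstruct the full feature set.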

  14. An enhanced performance through agent-based secure approach for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Bisen, Dhananjay; Sharma, Sanjeev

    2018-01-01

    This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected using optimal node reliability as a factor, calculated from node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval. After the agent nodes are selected, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the proposed approach, a comparative analysis against conventional schemes is carried out using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious nodes detected.
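The reliability factor can be sketched as a weighted combination of the listed features; the weights, normalisations, and the choice of which features act as penalties are illustrative assumptions, not the authors' formulation:

```python
# Illustrative weighted-sum sketch of a node-reliability score; all inputs
# are assumed pre-normalised to [0, 1], and higher score = better candidate.

def node_reliability(degree_diff, norm_distance, energy, mobility,
                     hello_interval, weights=(0.2, 0.2, 0.3, 0.2, 0.1)):
    """degree_diff, norm_distance and mobility are treated as penalties,
    so they enter as (1 - x); energy and hello interval as benefits."""
    w = weights
    return (w[0] * (1 - degree_diff) + w[1] * (1 - norm_distance)
            + w[2] * energy + w[3] * (1 - mobility) + w[4] * hello_interval)

nodes = {
    "n1": node_reliability(0.1, 0.2, 0.9, 0.1, 0.8),   # stable, high energy
    "n2": node_reliability(0.5, 0.6, 0.4, 0.7, 0.3),   # mobile, low energy
}
agent = max(nodes, key=nodes.get)   # pick the most reliable node as agent
print(agent)
```

Nodes scoring above a threshold (or the top-k per neighbourhood) would then serve as agent nodes feeding the FBSA detection stage.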

  15. Quantum teleportation scheme by selecting one of multiple output ports

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi; Hiroshima, Tohya

    2009-04-01

    The scheme of quantum teleportation, where Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic version and probabilistic version of the teleportation scheme aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state and (ii) the state is also optimized as well as Alice’s measurement. We analytically determine the optimal protocols for all the four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve the perfect teleportation in the asymptotic limit of N→∞ . The entanglement properties of the teleportation scheme are also discussed.

  16. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. The first, the 'rate-based scheme', feeds back position and rate information only; the second, the 'force-based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant-gain control scheme better suited to parallel mechanisms, and it can easily be modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force-based scheme can be adjusted individually in all three directions, whereas adjusting just one direction of the rate-based scheme directly affects the other two directions.

  17. Achieving universal health coverage through voluntary insurance: what can we learn from the experience of Lao PDR?

    PubMed Central

    2013-01-01

    Background The Government of Lao People’s Democratic Republic (Lao PDR) has embarked on a path to achieve universal health coverage (UHC) through implementation of four risk-protection schemes. One of these schemes is community-based health insurance (CBHI) – a voluntary scheme that targets roughly half the population. However, after 12 years of implementation, coverage through CBHI remains very low. Increasing coverage of the scheme would require expansion to households both in villages where CBHI is currently operating and in new geographic areas. In this study we explore the prospects of both types of expansion by examining household- and district-level data. Methods Using a household survey based on a case-comparison design of 3000 households, we examine the determinants of enrolment at the household level in areas where the scheme is currently operating. We model the determinants of enrolment using a probit model and predicted probabilities. Findings from focus group discussions are used to explain the quantitative findings. To examine the prospects for geographic scale-up, we use secondary data to compare characteristics of districts with and without insurance, using a combination of univariate and multivariate analyses. The multivariate analysis is a probit model, which models the factors associated with roll-out of CBHI to the districts. Results The household findings show that enrolment is concentrated among the better off and that adverse selection is present in the scheme. The district-level findings show that, to date, the scheme has been implemented in the most affluent areas, in closest proximity to the district hospitals, and in areas where quality of care is relatively good. Conclusions The household-level findings indicate that the scheme suffers from poor risk-pooling, which threatens financial sustainability.
The district-level findings call into question whether or not the Government of Laos can successfully expand to more remote, less affluent districts, with lower population density. We discuss the policy implications of the findings and specifically address whether CBHI can serve as a foundation for a national scheme, while exploring alternative approaches to reaching the informal sector in Laos and other countries attempting to achieve UHC. PMID:24344925
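The probit machinery used in the methods can be sketched with invented coefficients (the study's estimates are not reproduced here): the predicted enrolment probability is the standard normal CDF applied to a linear index of household characteristics.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def predicted_enrolment(income_quintile, chronically_ill,
                        beta0=-1.2, b_income=0.25, b_ill=0.6):
    """Probit predicted probability Phi(x'beta); all coefficients are
    hypothetical, chosen only to mirror the direction of the findings."""
    z = beta0 + b_income * income_quintile + b_ill * chronically_ill
    return phi(z)

# Better-off (quintile 5) and chronically ill households enrol more often,
# consistent with concentration among the better off and adverse selection.
print(predicted_enrolment(5, 1) > predicted_enrolment(1, 0))   # True
```

In practice the coefficients would be fitted by maximum likelihood on the survey data, and the predicted probabilities compared across household profiles.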

  18. The Politico-Economic Challenges of Ghana’s National Health Insurance Scheme Implementation

    PubMed Central

    Fusheini, Adam

    2016-01-01

    Background: National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme with positive ratings, while challenges were also noted. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Methods: Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Results: Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. Conclusion: The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended.
Thus, we conclude that there is a synergy between implementation and politics; and achieving UHC under the NHIS requires political stewardship. Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing with minimal interference in the operations. For sustainability of the scheme, authorities need to review the exemption policy, rate of contributions, especially, from informal sector employees and recruitment criteria of scheme workers, explore additional sources of funding and re-examine training needs of employees to strengthen their competences among others. PMID:27694681

  19. The Politico-Economic Challenges of Ghana's National Health Insurance Scheme Implementation.

    PubMed

    Fusheini, Adam

    2016-04-27

    National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme with positive ratings, while challenges were also noted. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and achieving UHC under the NHIS requires political stewardship.
Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing with minimal interference in the operations. For sustainability of the scheme, authorities need to review the exemption policy, rate of contributions, especially, from informal sector employees and recruitment criteria of scheme workers, explore additional sources of funding and re-examine training needs of employees to strengthen their competences among others. © 2016 by Kerman University of Medical Sciences

  20. A new phosphate-selective sorbent for the Rem Nut process. Laboratory investigation and field experience at a medium size wastewater treatment plant.

    PubMed

    Petruzzelli, D; De Florio, L; Dell'Erba, A; Liberti, L; Notarnicola, M; Sengupta, A K

    2003-01-01

    P-control technologies for municipal wastewater are essentially based on "destructive" methods that lead to the formation of concentrated solid phases (sludge), usually disposed of in controlled landfills. Ion exchange, as a "non-destructive" technology, allows selective removal and simultaneous recovery of pollutants, which can be recycled to the same and/or related production lines. In this context, the REM NUT process removes the nutrient species (HPO4(2-), NH4+, K+) present in biologically oxidised municipal effluents and recovers them in the form of struvites (MgNH4PO4; MgKPO4), premium-quality slow-release fertilisers. The main limitation to the extensive application of this ion-exchange-based process is the unavailability of selective exchangers for the specific removal of nutrient species. This paper illustrates the laboratory investigation and pilot-scale development of a so-called "P-driven" modified REM NUT scheme based on a new phosphate-selective sorbent developed at Lehigh University, PA, USA.

  1. DRDT: distributed and reliable data transmission with cooperative nodes for lossy wireless sensor networks.

    PubMed

    Seo, Jaewan; Kim, Moonseong; Hur, In; Choi, Wook; Choo, Hyunseung

    2010-01-01

    Recent studies have shown that in realistic wireless sensor network environments links are extremely unreliable. To recover from corrupted packets, most routing schemes that assume ideal radio environments use a retransmission mechanism, which may cause unnecessary retransmissions. Therefore, guaranteeing energy-efficient reliable data transmission is a fundamental routing issue in wireless sensor networks. However, replacing every existing routing scheme with an entirely new one is impractical, so simply proposing yet another standalone reliable routing scheme is not the answer. This paper proposes a Distributed and Reliable Data Transmission (DRDT) scheme with the goal of efficiently guaranteeing reliable data transmission. In particular, it is based on a pluggable modular approach so that it can extend existing routing schemes. DRDT offers reliable data transmission using neighbor nodes, i.e., helper nodes. A helper node is selected, in a distributed manner, from among the neighbor nodes of the receiver that overhear the data packet. DRDT effectively reduces the number of retransmissions by delegating the retransmission task from the sender to a helper node with higher link quality to the receiver when packet reception fails due to low link quality between the sender and the receiver. Comprehensive simulation results show that DRDT reduces end-to-end transmission cost by up to about 45% and delay by about 40% compared to existing schemes.
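The delegation step can be sketched as follows; the link-quality values and the rule that a helper must beat the sender's own link are illustrative assumptions about the selection logic, not DRDT's exact protocol:

```python
# Hedged sketch of helper-node choice: among neighbours that overheard the
# packet, delegate retransmission to the one with the best link quality to
# the receiver, but only if it beats the sender's own link.

def select_helper(overhearers, link_quality_to_receiver, sender_quality):
    """Return the helper node, or None if no overhearer beats the sender."""
    best = max(overhearers, key=lambda n: link_quality_to_receiver[n],
               default=None)
    if best is not None and link_quality_to_receiver[best] > sender_quality:
        return best
    return None

lq = {"A": 0.55, "B": 0.91, "C": 0.74}   # illustrative link qualities
print(select_helper(["A", "B", "C"], lq, sender_quality=0.60))  # B
print(select_helper(["A"], lq, sender_quality=0.60))            # None
```

In the distributed setting each overhearer would evaluate this condition locally (e.g. via staggered backoff timers) rather than at a central point.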

  2. Biomolecular structure manipulation using tailored electromagnetic radiation: a proof of concept on a simplified model of the active site of bacterial DNA topoisomerase.

    PubMed

    Jarukanont, Daungruthai; Coimbra, João T S; Bauerhenne, Bernd; Fernandes, Pedro A; Patel, Shekhar; Ramos, Maria J; Garcia, Martin E

    2014-10-21

    We report on the viability of breaking selected bonds in biological systems using tailored electromagnetic radiation. We first demonstrate, by performing large-scale simulations, that pulsed electric fields cannot produce selective bond breaking. Then, we present a theoretical framework for describing selective energy concentration on particular bonds of biomolecules upon application of tailored electromagnetic radiation. The theory is based on the mapping of biomolecules to a set of coupled harmonic oscillators and on optimal control schemes to describe optimization of temporal shape, the phase and polarization of the external radiation. We have applied this theory to demonstrate the possibility of selective bond breaking in the active site of bacterial DNA topoisomerase. For this purpose, we have focused on a model that was built based on a case study. Results are given as a proof of concept.

  3. Fraction number of trapped atoms and velocity distribution function in sub-recoil laser cooling scheme

    NASA Astrophysics Data System (ADS)

    Alekseev, V. A.; Krylova, D. D.

    1996-02-01

    The analytical investigation of Bloch equations is used to describe the main features of the 1D velocity selective coherent population trapping cooling scheme. For the initial stage of cooling the fraction of cooled atoms is derived in the case of a Gaussian initial velocity distribution. At very long times of interaction the fraction of cooled atoms and the velocity distribution function are described by simple analytical formulae and do not depend on the initial distribution. These results are in good agreement with those of Bardou, Bouchaud, Emile, Aspect and Cohen-Tannoudji based on statistical analysis in terms of Levy flights and with Monte-Carlo simulations of the process.

  4. The PLUTO code for astrophysical gasdynamics .

    NASA Astrophysics Data System (ADS)

    Mignone, A.

    Present numerical codes appeal to a consolidated theory based on finite-difference and Godunov-type schemes. In this context we have developed a versatile numerical code, PLUTO, suitable for the solution of high-Mach-number flows in 1, 2 and 3 spatial dimensions and in different systems of coordinates. Different hydrodynamic modules and algorithms may be independently selected to properly describe Newtonian, relativistic, MHD, or relativistic MHD fluids. The modular structure exploits a general framework for integrating a system of conservation laws, built on modern Godunov-type shock-capturing schemes. The code is freely distributed under the GNU public license and is available for download to the astrophysical community at the URL http://plutocode.to.astro.it.
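As a minimal illustration of the Godunov-type building block such codes rest on (an exposition-only sketch; PLUTO's solvers are far more elaborate), consider first-order upwinding for linear advection, where the Godunov flux for a > 0 is simply the upwind state:

```python
def advect(u, a, dx, dt, steps):
    """First-order upwind (Godunov) update for u_t + a u_x = 0, a > 0,
    with periodic boundaries (u[-1] wraps around in Python)."""
    nu = a * dt / dx                       # CFL number
    assert 0 < nu <= 1.0, "CFL condition violated"
    u = list(u)
    n = len(u)
    for _ in range(steps):
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    return u

# A square pulse advected one full period around a periodic domain.
u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
u1 = advect(u0, a=1.0, dx=1.0, dt=1.0, steps=16)
print(u1 == u0)   # with nu == 1 the upwind scheme is exact: True
```

Higher-order shock-capturing schemes replace the upwind flux with a reconstructed Riemann-solver flux, but the conservative update structure is the same.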

  5. Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane

    NASA Astrophysics Data System (ADS)

    Smoczek, J.; Szpytko, J.; Hyla, P.

    2014-07-01

    The paper describes the software-hardware equipment and control-measurement solutions elaborated to prototype the laboratory-scale overhead crane control system. A novel approach to crane dynamic-system modelling and fuzzy robust control scheme design is presented. An iterative procedure for designing a fuzzy scheduling control scheme is developed, based on interval analysis of the discrete-time closed-loop system's characteristic polynomial coefficients in the presence of variation in rope length and payload mass. The procedure selects the minimum set of operating points, corresponding to the midpoints of the membership functions, at which the linear controllers are determined through desired pole assignment. Experimental results obtained on the laboratory stand are presented.
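The scheduling part can be sketched in miniature: locally designed gains at a few operating points are blended by triangular membership functions over the scheduling variable (rope length here). The operating points, gains, and membership shapes below are illustrative assumptions, not the paper's design:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Operating points: rope length (m) -> locally designed gain (illustrative).
points = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.8)]

def scheduled_gain(length):
    """Membership-weighted blend of the locally designed gains."""
    mems = [tri(length, c - 1.0, c, c + 1.0) for c, _ in points]
    total = sum(mems)
    if total == 0:   # outside the covered range: clamp to the nearest point
        return points[0][1] if length < points[0][0] else points[-1][1]
    return sum(m * g for m, (_, g) in zip(mems, points)) / total

print(scheduled_gain(1.0))   # 4.0  (exactly at the first operating point)
print(scheduled_gain(1.5))   # 3.25 (equal blend of 4.0 and 2.5)
```

In the paper's procedure the operating points themselves are chosen iteratively, via the interval analysis, so that the blended closed loop stays robust across the whole parameter range.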

  6. The effect of financial incentives on the quality of health care provided by primary care physicians.

    PubMed

    Scott, Anthony; Sivey, Peter; Ait Ouakrim, Driss; Willenberg, Lisa; Naccarella, Lucio; Furler, John; Young, Doris

    2011-09-07

    The use of blended payment schemes in primary care, including the use of financial incentives to directly reward 'performance' and 'quality', is increasing in a number of countries. There are many examples in the US, and the Quality and Outcomes Framework (QOF) for general practitioners (GPs) in the UK is an example of a major system-wide reform. Despite the popularity of these schemes, there is currently little rigorous evidence of their success in improving the quality of primary health care, or of whether such an approach is cost-effective relative to other ways to improve the quality of care. The aim of this review is to examine the effect of changes in the method and level of payment on the quality of care provided by primary care physicians (PCPs) and to identify: i) the different types of financial incentives that have improved quality; ii) the characteristics of patient populations for whom quality of care has been improved by financial incentives; and iii) the characteristics of PCPs who have responded to financial incentives. We searched the Cochrane Effective Practice and Organisation of Care (EPOC) Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Database of Systematic Reviews (CDSR) (The Cochrane Library), MEDLINE, HealthSTAR, EMBASE, CINAHL, PsychLIT, and ECONLIT. Searches of Internet-based economics and health economics working paper collections were also conducted. Finally, studies were identified through the reference lists of retrieved articles, websites of key organisations, and direct contact with key authors in the field. Articles were included if they were published from 2000 to August 2009. We included randomised controlled trials (RCT), controlled before-and-after studies (CBA), and interrupted time series analyses (ITS) evaluating the impact of different financial interventions on the quality of care delivered by primary healthcare physicians (PCPs).
Quality of care was defined as patient reported outcome measures, clinical behaviours, and intermediate clinical and physiological measures. Two review authors independently extracted data and assessed study quality, in consultation with two other review authors where there was disagreement. For each included study, we reported the estimated effect sizes and confidence intervals. Seven studies were included in this review. Three of the studies evaluated single-threshold target payments, one examined a fixed fee per patient achieving a specified outcome, one study evaluated payments based on the relative ranking of medical groups' performance (tournament-based pay), one study examined a mix of tournament-based pay and threshold payments, and one study evaluated changing from a blended payments scheme to salaried payment. Three cluster RCTs examined smoking cessation; one CBA examined patients' assessment of the quality of care; one CBA examined cervical screening, mammography screening, and HbA1c; one ITS focused on four outcomes in diabetes; and one controlled ITS (a difference-in-difference design) examined cervical screening, mammography screening, HbA1c, childhood immunisation, chlamydia screening, and appropriate asthma medication. Six of the seven studies showed positive but modest effects on quality of care for some primary outcome measures, but not all. One study found no effect on quality of care. Poor study design led to substantial risk of bias in most studies. In particular, none of the studies addressed issues of selection bias as a result of the ability of primary care physicians to select into or out of the incentive scheme or health plan. The use of financial incentives to reward PCPs for improving the quality of primary healthcare services is growing. However, there is insufficient evidence to support or not support the use of financial incentives to improve the quality of primary health care. 
Implementation should proceed with caution, and incentive schemes should be more carefully designed before implementation. In addition to basing incentive design more on theory, a large literature discussing experiences with these schemes can be drawn upon for lessons to influence or modify their design. More rigorous study designs need to be used to account for the selection of physicians into incentive schemes. The use of instrumental variable techniques should be considered to assist with the identification of treatment effects in the presence of selection bias and other sources of unobserved heterogeneity. In randomised trials, care must be taken to use the correct unit of analysis, and more attention should be paid to blinding. Studies should also examine the potential unintended consequences of incentive schemes by having a stronger theoretical basis, including a broader range of outcomes, and conducting more extensive subgroup analyses. Studies should more consistently describe i) the type of payment scheme at baseline or in the control group, ii) how payments to medical groups were used and distributed within the groups, and iii) the size of the new payments as a percentage of total revenue. Further research comparing the relative costs and effects of financial incentives with other behaviour change interventions is also required.

  7. Construction of an evaluation and selection system of emergency treatment technology based on dynamic fuzzy GRA method for phenol spill

    NASA Astrophysics Data System (ADS)

    Zhao, Jingjing; Yu, Lean; Li, Lian

    2017-05-01

    Chemical contingency spills often involve a great deal of complexity, fuzziness and uncertainty. In order to obtain the optimum emergency disposal technology scheme as soon as a chemical pollution accident occurs, a technique evaluation system was developed based on the dynamic fuzzy GRA method, and the feasibility of the proposed method was tested using an emergency phenol spill accident that occurred on a highway.

  8. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
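The contrast between the step and linear schemes can be sketched with a toy example (the data and threshold below are hypothetical, not from the lysimeter study; a spline scheme would follow the same pattern with a smoother interpolant):

```python
from bisect import bisect_right

# Hypothetical significant mass changes (time in minutes, cumulative mass in g)
t_sig = [0.0, 10.0, 20.0, 30.0]
m_sig = [0.0, 0.0, 5.0, 5.0]

def step_interp(t):
    # Step scheme: hold the last significant value (blocky sub-daily fluxes).
    return m_sig[bisect_right(t_sig, t) - 1]

def linear_interp(t):
    # Linear scheme: smooth transition between significant changes,
    # giving realistic flux rates at high output resolution.
    i = min(bisect_right(t_sig, t), len(t_sig) - 1)
    t0, t1, m0, m1 = t_sig[i - 1], t_sig[i], m_sig[i - 1], m_sig[i]
    return m0 if t1 == t0 else m0 + (m1 - m0) * (t - t0) / (t1 - t0)

# Heuristic from the abstract: a jump above a threshold is treated as a
# precipitation event and kept on the step scheme rather than smoothed.
THRESHOLD = 2.0  # hypothetical, in g per interval
events = [m_sig[i + 1] - m_sig[i] > THRESHOLD for i in range(len(m_sig) - 1)]

print(step_interp(15.0), linear_interp(15.0), events)
```

At t = 15 min the step scheme still reports the old value while the linear scheme is halfway through the mass change, which is why the step scheme only yields reasonable flux rates at daily resolution.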

  9. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  10. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals which have nonzero structure which is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no diagonal type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
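The per-diagonal trade-off described above can be illustrated with a small sketch (the storage types, cost estimates, and weights below are hypothetical placeholders for the CYBER 205 figures obtained by numerical experimentation):

```python
# For each diagonal, estimate CPU time and storage for each candidate
# storage type, weight the two resources by user preference, pick the min.

def select_storage(diagonals, w_cpu=0.5, w_mem=0.5):
    """diagonals: list of dicts mapping storage type -> (cpu, mem) estimate."""
    choices = []
    for est in diagonals:
        # weighted resource requirement, reflecting the user's priorities
        best = min(est, key=lambda t: w_cpu * est[t][0] + w_mem * est[t][1])
        choices.append(best)
    return choices

# One diagonal where dense storage is fast but large, sparse is slow but small.
est = {"dense": (1.0, 10.0), "sparse": (4.0, 2.0)}
print(select_storage([est], w_cpu=0.9, w_mem=0.1))  # CPU-bound user
print(select_storage([est], w_cpu=0.1, w_mem=0.9))  # storage-bound user
```

The same diagonal gets different storage depending on the weights, which is the trade-off the initialization subroutine resolves.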

  11. Strategies for implementing genomic selection for feed efficiency in dairy cattle breeding schemes.

    PubMed

    Wallén, S E; Lillehammer, M; Meuwissen, T H E

    2017-08-01

    Alternative genomic selection and traditional BLUP breeding schemes were compared for the genetic improvement of feed efficiency in simulated Norwegian Red dairy cattle populations. The change in genetic gain over time and achievable selection accuracy were studied for milk yield and residual feed intake, as a measure of feed efficiency. When including feed efficiency in genomic BLUP schemes, it was possible to achieve high selection accuracies for genomic selection, and all genomic BLUP schemes gave better genetic gain for feed efficiency than BLUP using a pedigree relationship matrix. However, introducing a second trait in the breeding goal caused a reduction in the genetic gain for milk yield. When using contracted test herds with genotyped and feed efficiency recorded cows as a reference population, adding an additional 4,000 new heifers per year to the reference population gave accuracies that were comparable to a male reference population that used progeny testing with 250 daughters per sire. When the test herd consisted of 500 or 1,000 cows, lower genetic gain was found than using progeny test records to update the reference population. It was concluded that to improve difficult to record traits, the use of contracted test herds that had additional recording (e.g., measurements required to calculate feed efficiency) is a viable option, possibly through international collaborations. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. A Computational Geometry Approach to Automated Pulmonary Fissure Segmentation in CT Examinations

    PubMed Central

    Pu, Jiantao; Leader, Joseph K; Zheng, Bin; Knollmann, Friedrich; Fuhrman, Carl; Sciurba, Frank C; Gur, David

    2010-01-01

    Identification of pulmonary fissures, which form the boundaries between the lobes in the lungs, may be useful during clinical interpretation of CT examinations to assess the early presence and characterize the manifestation of several lung diseases. Motivated by the unique nature of the surface shape of pulmonary fissures in three-dimensional space, we developed a new automated scheme using computational geometry methods to detect and segment fissures depicted on CT images. After geometric modeling of the lung volume using the Marching Cube Algorithm, Laplacian smoothing is applied iteratively to enhance pulmonary fissures by depressing non-fissure structures while smoothing the surfaces of lung fissures. Next, an Extended Gaussian Image based procedure is used to locate the fissures in a statistical manner that approximates the fissures using a set of plane “patches.” This approach has several advantages, such as independence from anatomic knowledge of the lung structure except the surface shape of fissures, limited sensitivity to other lung structures, and ease of implementation. The scheme performance was evaluated by two experienced thoracic radiologists using a set of 100 images (slices) randomly selected from 10 screening CT examinations. In this preliminary evaluation, 98.7% and 94.9% of scheme-segmented fissure voxels are within 2 mm of the fissures marked independently by the two radiologists in the testing image dataset. Using the scheme-detected fissures as the reference, 89.4% and 90.1% of manually marked fissure points have a distance ≤ 2 mm to the reference, suggesting possible under-segmentation by the scheme. The case-based RMS (root-mean-square) distances (“errors”) between our scheme and the radiologists ranged from 1.48±0.92 to 2.04±3.88 mm. The discrepancy of fissure detection results between the automated scheme and either radiologist is smaller in this dataset than the inter-reader variability. PMID:19272987
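The iterative Laplacian smoothing step can be sketched on a toy mesh (2-D for brevity; the vertex data, neighbour lists, and parameters are hypothetical, not from the paper):

```python
# Laplacian smoothing: each vertex moves a fraction lam toward the centroid
# of its neighbours, depressing sharp spikes while preserving smooth sheets.

def laplacian_smooth(verts, neighbors, lam=0.5, iterations=10):
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            cx = sum(verts[j][0] for j in nbrs) / len(nbrs)
            cy = sum(verts[j][1] for j in nbrs) / len(nbrs)
            new.append((v[0] + lam * (cx - v[0]), v[1] + lam * (cy - v[1])))
        verts = new
    return verts

# A line of vertices with one spike in the middle; the endpoints are pinned
# by listing each as its own only "neighbour".
verts = [(0.0, 0.0), (1.0, 0.0), (2.0, 3.0), (3.0, 0.0), (4.0, 0.0)]
neighbors = {0: [0], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [4]}
smoothed = laplacian_smooth(verts, neighbors)
print(round(smoothed[2][1], 3))  # spike height shrinks toward the sheet
```

On a real lung surface mesh the same update runs over the marching-cubes triangulation, so ridge-like non-fissure structures are depressed while the plate-like fissure surfaces survive.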

  13. Modeling of genetic gain for single traits from marker-assisted seedling selection in clonally propagated crops

    PubMed Central

    Ru, Sushan; Hardner, Craig; Carter, Patrick A; Evans, Kate; Main, Dorrie; Peace, Cameron

    2016-01-01

    Seedling selection identifies superior seedlings as candidate cultivars based on predicted genetic potential for traits of interest. Traditionally, genetic potential is determined by phenotypic evaluation. With the availability of DNA tests for some agronomically important traits, breeders have the opportunity to include DNA information in their seedling selection operations—known as marker-assisted seedling selection. A major challenge in deploying marker-assisted seedling selection in clonally propagated crops is a lack of knowledge of the genetic gain achievable from alternative strategies. Existing models based on additive effects in seed-propagated crops are not directly relevant for seedling selection of clonally propagated crops, as clonal propagation captures all genetic effects, not just additive ones. This study modeled genetic gain from traditional and various marker-based seedling selection strategies on a single-trait basis through analytical derivation and stochastic simulation, based on a generalized seedling selection scheme for clonally propagated crops. Various trait-test scenarios with a range of broad-sense heritability and proportion of genotypic variance explained by DNA markers were simulated for two populations with different segregation patterns. Both derived and simulated results indicated that marker-based strategies tended to achieve higher genetic gain than phenotypic seedling selection for a trait where the proportion of genotypic variance explained by marker information was greater than the broad-sense heritability. Results from this study provide guidance in optimizing genetic gain from seedling selection for single traits where DNA tests providing marker information are available. PMID:27148453
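A minimal stochastic sketch of the comparison, assuming a simple additive model with hypothetical parameters (not the paper's full derivation), illustrates the criterion that marker-based selection tends to win when the marker-explained proportion p exceeds the broad-sense heritability H2:

```python
import random

# Genetic gain = mean genotypic value of the selected seedlings.
# Phenotype = G + environmental noise scaled so that Var(G)/Var(P) = H2.
# Marker index = sqrt(p)*G + noise, so markers explain a fraction p of
# the genotypic variance. All parameter values are hypothetical.

def simulate_gain(n=5000, k=250, H2=0.3, p=0.6, seed=1):
    rng = random.Random(seed)
    G = [rng.gauss(0, 1) for _ in range(n)]  # true genotypic values
    pheno = [g + rng.gauss(0, ((1 - H2) / H2) ** 0.5) for g in G]
    marker = [p ** 0.5 * g + rng.gauss(0, (1 - p) ** 0.5) for g in G]
    top = lambda crit: sorted(range(n), key=lambda i: crit[i])[-k:]
    gain = lambda idx: sum(G[i] for i in idx) / k
    return gain(top(pheno)), gain(top(marker))

g_pheno, g_marker = simulate_gain()
# With p (0.6) > H2 (0.3), marker-based selection achieves the higher gain.
print(round(g_pheno, 2), round(g_marker, 2))
```

Swapping the parameter values (p below H2) reverses the ranking, matching the abstract's stated condition.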

  14. Diode laser based resonance ionization mass spectrometric measurement of strontium-90

    NASA Astrophysics Data System (ADS)

    Bushaw, B. A.; Cannon, B. D.

    1997-10-01

    A diode laser based scheme for the isotopically selective excitation and ionization of strontium is presented. The double-resonance excitation 5s² ¹S₀ → 5s5p ³P₁ → 5s6s ³S₁ is followed by photoionization at 488 nm. The isotope shifts and hyperfine structure in the resonance transitions have been accurately measured for the stable isotopes and 90Sr, with the measurement of the 90Sr shifts using sub-pg samples. Analytical tests, using graphite crucible atomization, demonstrated 90Sr detection limits of 0.8 fg and overall (optical + mass spectrometer) isotopic selectivity of >10¹⁰ against stable strontium.

  15. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thus the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for processing Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weights directly at all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance is non-linearly related to the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. 
To assess the performance of the algorithm with respect to errors in the retrieved surface water reflectance or remote-sensing reflectance, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to the GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  16. Projecting future precipitation and temperature at sites with diverse climate through multiple statistical downscaling schemes

    NASA Astrophysics Data System (ADS)

    Vallam, P.; Qin, X. S.

    2017-10-01

    Anthropogenically driven climate change would affect the global ecosystem and is becoming a worldwide concern. Numerous studies have been undertaken to determine the future trends of meteorological variables at different scales. Despite these studies, there remains significant uncertainty in the prediction of future climates. To examine the uncertainty arising from using different schemes to downscale the meteorological variables for future horizons, projections from different statistical downscaling schemes were examined. These schemes included the statistical downscaling method (SDSM), the change factor method incorporated with LARS-WG, and the bias-corrected disaggregation (BCD) method. Global circulation models (GCMs) based on CMIP3 (HadCM3) and CMIP5 (CanESM2) were utilized to perturb the changes in the future climate. Five study sites (i.e., Alice Springs, Edmonton, Frankfurt, Miami, and Singapore) with diverse climatic conditions were chosen for examining the spatial variability of applying various statistical downscaling schemes. The study results indicated that the regions experiencing heavy precipitation intensities were most likely to demonstrate divergence between the predictions from the various statistical downscaling methods. Also, the variance computed in projecting the weather extremes indicated the uncertainty derived from the selection of downscaling tools and climate models. This study could help gain an improved understanding of the features of different downscaling approaches and the overall downscaling uncertainty.
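Of the schemes compared, the change factor approach is the simplest to sketch (all values below are hypothetical): the observed local series is perturbed by the GCM-projected change, additively for temperature and multiplicatively for precipitation:

```python
# Change factor downscaling sketch: shift observed temperatures by the
# GCM-projected mean change, and scale observed precipitation by the
# GCM-projected ratio. Values are illustrative only.

def change_factor_temp(obs, gcm_base_mean, gcm_future_mean):
    delta = gcm_future_mean - gcm_base_mean  # additive change factor
    return [t + delta for t in obs]

def change_factor_precip(obs, gcm_base_mean, gcm_future_mean):
    ratio = gcm_future_mean / gcm_base_mean  # multiplicative change factor
    return [p * ratio for p in obs]

print(change_factor_temp([20.0, 22.0], 15.0, 17.5))    # +2.5 degC shift
print(change_factor_precip([4.0, 0.0], 100.0, 120.0))  # 20% wetter
```

In schemes such as LARS-WG the change factors perturb the parameters of a stochastic weather generator rather than the raw series, but the perturbation idea is the same.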

  17. Simulation of selected genealogies.

    PubMed

    Slade, P F

    2000-02-01

    Algorithms for generating genealogies with selection conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237), which is the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on the mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.

  18. Study on test of coal co-firing for 600MW ultra supercritical boiler with four walls tangential burning

    NASA Astrophysics Data System (ADS)

    Ying, Wu; Yong-lu, Zhong; Guo-mingi, Yin

    2018-06-01

    For nine coals commonly used at a Jiangxi power plant, two kinds of coal were selected for a co-firing test on the basis of proximate analysis, ultimate analysis and thermogravimetric analysis. During the co-firing test, two load points were selected and three coal mixtures were prepared. Moreover, under each coal blending scheme, the optimal oxygen content was obtained through an oxygen-varying test. Finally, by measuring the boiler efficiency and the coal consumption of power supply under the different co-firing schemes, the recommended coal co-firing scheme was obtained.

  19. QKD using polarization encoding with active measurement basis selection

    NASA Astrophysics Data System (ADS)

    Duplinskiy, A.; Ustimchik, V.; Kanapin, A.; Kurochkin, Y.

    2017-11-01

    We report a proof-of-principle quantum key distribution experiment using a one-way optical scheme with polarization encoding implementing the BB84 protocol. LiNbO3 phase modulators are used for generating polarization states for Alice and for active basis selection for Bob. This allows the former to use a single laser source, while the latter needs only two single-photon detectors. The presented optical scheme is simple and consists of standard fiber components. A calibration algorithm for the three polarization controllers used in the scheme has been developed. The experiment was carried out with laser pulses at a 10 MHz repetition frequency over a distance of 50 km of standard telecom optical fiber.

  20. A data base and analysis program for shuttle main engine dynamic pressure measurements. Appendix B: Data base plots for SSME tests 901-290 through 901-414

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base and data base management system developed to characterize the Space Shuttle Main Engine (SSME) dynamic pressure environment is described. The data base represents dynamic pressure measurements obtained during single engine hot firing tests of the SSME. Software is provided to permit statistical evaluation of selected measurements under specified operating conditions. An interpolation scheme is also included to estimate spectral trends with SSME power level. Flow dynamic environments in high performance rocket engines are discussed.

  1. A data base and analysis program for shuttle main engine dynamic pressure measurements. Appendix C: Data base plots for SSME tests 902-214 through 902-314

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base and data base management system developed to characterize the Space Shuttle Main Engine (SSME) dynamic pressure environment is reported. The data base represents dynamic pressure measurements obtained during single engine hot firing tests of the SSME. Software is provided to permit statistical evaluation of selected measurements under specified operating conditions. An interpolation scheme is included to estimate spectral trends with SSME power level. Flow dynamic environments in high performance rocket engines are described.

  2. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873

  3. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.

  4. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    NASA Astrophysics Data System (ADS)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. 
The second method is a novel template-based recognition scheme that takes advantage of the neural representation's invariance in noise. The scheme centers on a speech similarity measure based on the longest common subsequence between spike sequences. The combined encoding and decoding scheme outperforms a benchmark system in extremely noisy acoustic conditions. Finally, I consider methods for decoding spike representations of continuous speech. To help guide the alignment of templates to words, I design a syllable detection scheme that robustly marks the locations of syllabic nuclei. The scheme combines SVM-based training with a peak selection algorithm designed to improve noise tolerance. By incorporating syllable information into the ASR system, I obtain strong recognition results in noisy conditions, although the performance in noiseless conditions is below the state of the art. The work presented here constitutes a novel approach to the problem of ASR that can be applied in the many challenging acoustic environments in which we use computer technologies today. The proposed spike-based processing methods can potentially be exploited in efficient hardware implementations and could significantly reduce the computational costs of ASR. The work also provides a framework for understanding the advantages of spike-based acoustic coding in the human brain.
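The longest-common-subsequence similarity at the heart of the template scheme can be sketched as the classic dynamic program (the spike labels below are hypothetical):

```python
# Spike sequences are compared as ordered lists of which neuron fired;
# the LCS length, normalised by the template length, scores how well a
# noisy utterance matches a word template.

def lcs_length(a, b):
    # classic dynamic program, O(len(a) * len(b))
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

template = ["n1", "n4", "n2", "n7"]        # spike order for a clean word
noisy    = ["n1", "n9", "n4", "n7", "n3"]  # same word with spurious spikes
score = lcs_length(template, noisy) / len(template)
print(score)
```

Because the LCS ignores insertions, spurious noise-driven spikes lower the score only mildly, which is what makes the measure robust in noisy acoustic conditions.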

  5. Path scheduling for multiple mobile actors in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Trapasiya, Samir D.; Soni, Himanshu B.

    2017-05-01

    In wireless sensor networks (WSNs), energy is the main constraint. In this work we address this issue for single as well as multiple mobile sensor actor networks. We propose a Rendezvous Point Selection Scheme (RPSS) in which Rendezvous Nodes are selected by a set-covering approach and, from these, Rendezvous Points are selected so as to reduce the tour length. Each mobile actor's tour is scheduled to pass through those Rendezvous Points by solving a Travelling Salesman Problem (TSP). We also propose a novel rendezvous node rotation scheme for fair utilisation of all the nodes. We compared RPSS with a stationary actor scheme as well as RD-VT, RD-VT-SMT and WRP-SMT on performance metrics such as energy consumption, network lifetime and route length, and found better outcomes in all cases for a single actor. We also applied RPSS to multiple mobile actor cases, namely Multi-Actor Single Depot (MASD) termination and Multi-Actor Multiple Depot (MAMD) termination, and observed through extensive simulation that MAMD saves network energy in an optimised way and enhances network lifetime compared to all other schemes.
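The tour-scheduling step can be sketched with a nearest-neighbour TSP heuristic over toy Rendezvous Point coordinates (a simple stand-in for whatever TSP solver the paper uses; all values are hypothetical):

```python
import math

# Nearest-neighbour tour: from the depot, repeatedly visit the closest
# unvisited Rendezvous Point. Cheap and illustrative, not optimal.

def plan_tour(points, start=(0.0, 0.0)):
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    remaining, pos, total, order = list(points), start, 0.0, []
    while remaining:
        nxt = min(remaining, key=lambda p: dist(pos, p))
        total += dist(pos, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order, total

rps = [(0.0, 1.0), (3.0, 0.0), (0.0, 2.0)]  # hypothetical Rendezvous Points
order, total = plan_tour(rps)
print(order, round(total, 2))
```

Shorter tours mean the mobile actor spends less energy and visits each Rendezvous Point sooner, which is why the Rendezvous Points are placed to shrink the tour in the first place.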

  6. Limited utility of residue masking for positive-selection inference.

    PubMed

    Spielman, Stephanie J; Dawson, Eric T; Wilke, Claus O

    2014-09-01

    Errors in multiple sequence alignments (MSAs) can reduce accuracy in positive-selection inference. Therefore, it has been suggested to filter MSAs before conducting further analyses. One widely used filter, Guidance, allows users to remove MSA positions aligned with low confidence. However, Guidance's utility in positive-selection inference has been disputed in the literature. We have conducted an extensive simulation-based study to characterize fully how Guidance impacts positive-selection inference, specifically for protein-coding sequences of realistic divergence levels. We also investigated whether novel scoring algorithms, which phylogenetically correct confidence scores, and a new gap-penalization score-normalization scheme improved Guidance's performance. We found that no filter, including original Guidance, consistently benefitted positive-selection inferences. Moreover, all improvements detected were exceedingly minimal, and in certain circumstances, Guidance-based filters worsened inferences. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Relay Selection for Cooperative Relaying in Wireless Energy Harvesting Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei; Li, Songsong; Jiang, Fengjiao; Cao, Lijie

    2018-01-01

    Energy harvesting from the surroundings is a promising solution to provide an energy supply and extend the life of wireless sensor networks. Recently, energy harvesting has been shown to be an attractive solution to prolong the operation of cooperative networks. In this paper, we propose a relay selection scheme to optimize amplify-and-forward (AF) cooperative transmission in wireless energy harvesting cooperative networks. The harvested energy and channel conditions are considered to select the optimal relay as the cooperative relay so as to minimize the outage probability of the system. Simulation results show that our proposed relay selection scheme achieves better outage performance than other strategies.
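A minimal sketch of such a selection rule, assuming a simple model in which the weaker hop limits the end-to-end quality and the relay's transmit power is set by its harvested energy (the model and all numbers are illustrative, not the paper's):

```python
# Pick the relay whose bottleneck (weaker-hop) SNR is largest: maximising
# the worst hop is a common proxy for minimising outage probability.

def select_relay(relays):
    """relays: list of (name, h_sr, h_rd, harvested_power) tuples."""
    def bottleneck_snr(r):
        _, h_sr, h_rd, p_harv = r
        # second-hop quality scales with the energy the relay harvested
        return min(h_sr, p_harv * h_rd)
    return max(relays, key=bottleneck_snr)[0]

relays = [
    ("R1", 8.0, 6.0, 0.2),  # good channels, little harvested energy
    ("R2", 5.0, 4.0, 1.0),  # balanced channels and energy
    ("R3", 9.0, 1.0, 1.0),  # weak second hop
]
print(select_relay(relays))
```

R1's strong channels are wasted because its harvested energy throttles the relay-to-destination hop, so the balanced relay wins.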

  8. Evolving effective behaviours to interact with tag-based populations

    NASA Astrophysics Data System (ADS)

    Yucel, Osman; Crawford, Chad; Sen, Sandip

    2015-07-01

    Tags and other characteristics, externally perceptible features that are consistent among groups of animals or humans, can be used by others to determine appropriate response strategies in societies. This usage of tags can be extended to artificial environments, where agents can significantly reduce cognitive effort spent on appropriate strategy choice and behaviour selection by reusing strategies for interacting with new partners based on their tags. Strategy selection mechanisms developed based on this idea have successfully evolved stable cooperation in games such as the Prisoner's Dilemma, but rely upon payoff sharing and matching methods that limit the applicability of the tag framework. Our goal is to develop a general classification and behaviour selection approach based on the tag framework. We propose and evaluate alternative tag matching and adaptation schemes for a new, incoming individual to select appropriate behaviour against any population member of an existing, stable society. Our proposed approach allows agents to evolve both the optimal tag for the environment as well as appropriate strategies for existing agent groups. We show that these mechanisms will allow for robust selection of optimal strategies by agents entering a stable society and analyse the various environments where this approach is effective.

  9. Fuzzy adaptive strong tracking scaled unscented Kalman filter for initial alignment of large misalignment angles

    NASA Astrophysics Data System (ADS)

    Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui

    2016-07-01

    In the initial alignment of a strapdown inertial navigation system (SINS), large misalignment angles introduce nonlinearities, which are usually handled with the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and a strong tracking scaled unscented Kalman filter (STSUKF) with fixed parameters is proposed to improve convergence speed; these parameters, however, are constructed artificially and are uncertain in real applications. To further improve alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). An initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is then designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.
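
    The scaled unscented transform at the core of SUKF/STSUKF can be sketched as follows, using the standard sigma-point construction with commonly used default parameters (not the paper's tuned or fuzzy-adapted values):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented transform: 2n+1 sigma points plus mean/covariance
    weights for state mean x and covariance P."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)      # columns are the spread vectors
    pts = np.vstack([x, x + S.T, x - S.T])     # rows: x0, x +/- each column of S
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

pts, Wm, Wc = sigma_points(np.array([1.0, 2.0]), np.eye(2))
```

By construction the weighted mean of the sigma points reproduces the state mean exactly, which is what the filter relies on when propagating through the nonlinear alignment equations.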

  10. Distributed fiber-optic laser-ultrasound generation based on ghost-mode of tilted fiber Bragg gratings.

    PubMed

    Tian, Jiajun; Zhang, Qi; Han, Ming

    2013-03-11

    Active ultrasonic testing is widely used for medical diagnosis, material characterization and structural health monitoring. The ultrasonic transducer is a key component in active ultrasonic testing. Due to their many advantages, such as small size, light weight, and immunity to electromagnetic interference, fiber-optic ultrasonic transducers are particularly attractive for permanent, embedded applications in active ultrasonic testing for structural health monitoring. However, current fiber-optic transducers only allow effective ultrasound generation at a single location, the fiber end. Here we demonstrate a fiber-optic device that can effectively generate ultrasound at multiple, selected locations along a fiber in a controllable manner, based on a smart light-tapping scheme that taps out only the light of a particular wavelength for laser-ultrasound generation and allows light at longer wavelengths to pass without loss. Such a scheme may also find applications in remote fiber-optic device tuning and quasi-distributed biochemical fiber-optic sensing.

  11. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time-domain signals so the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is shown to be the product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
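
    The signal-shifting idea can be sketched as a frequency-domain time shift that cancels each channel's phase at the fundamental frequency; function names and the phase convention are illustrative, not the paper's implementation:

```python
import numpy as np

def zero_phase_align(signals, phases, f0, fs):
    """Shift each channel so its phase at frequency f0 becomes zero.
    signals: (n_channels, n_samples); phases: estimated phase (rad) per
    channel at f0; fs: sampling rate in Hz."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.empty_like(signals)
    for j, (sig, phi) in enumerate(zip(signals, phases)):
        tau = -phi / (2.0 * np.pi * f0)   # time advance that cancels phi at f0
        out[j] = np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau), n)
    return out

# a single tone with a known 0.7 rad phase offset is driven back to zero phase
fs, n, f0 = 100.0, 1000, 5.0
t = np.arange(n) / fs
aligned = zero_phase_align(np.cos(2 * np.pi * f0 * t + 0.7)[None, :], [0.7], f0, fs)
```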

  12. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. 
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
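
    The variable-importance idea underlying VI/AVI can be illustrated with a simple permutation importance; the plain threshold predictor below stands in for a trained random forest, so this is a toy sketch of the concept rather than the paper's method:

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Accuracy drop when each feature is shuffled; a larger drop means
    the feature matters more to the predictor."""
    base = np.mean(predict(X) == y)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])              # destroy feature j's information
        imp[j] = base - np.mean(predict(Xp) == y)
    return imp

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)              # only feature 0 is informative
imp = permutation_importance(lambda A: (A[:, 0] > 0).astype(int), X, y, rng)
```

Averaging such importances over repeated fits, as AVI does with RF, stabilizes the ranking before features are dropped.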

  13. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. 
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models.

  14. Invited review: Current state of genetic improvement in dairy sheep.

    PubMed

    Carta, A; Casu, Sara; Salaris, S

    2009-12-01

    Dairy sheep have been farmed traditionally in the Mediterranean basin in southern Europe, central Europe, eastern Europe, and in Near East countries. Currently, dairy sheep farming systems vary from extensive to intensive according to the economic relevance of the production chain and the specific environment and breed. Modern breeding programs were conceived in the 1960s. The most efficient selection scheme for local dairy sheep breeds is based on pyramidal management of the population with the breeders of nucleus flocks at the top, where pedigree and official milk recording, artificial insemination, controlled natural mating, and breeding value estimation are carried out to generate genetic progress. The genetic progress is then transferred to the commercial flocks through artificial insemination or natural-mating rams. Increasing milk yield is still the most profitable breeding objective for several breeds. Almost all milk is used for cheese production and, consequently, milk content traits are very important. Moreover, other traits are gaining interest for selection: machine milking ability and udder morphology, resistance to diseases (mastitis, internal parasites, scrapie), and traits related to the nutritional value of milk (fatty acid composition). Current breeding programs based on the traditional quantitative approach have achieved appreciable genetic gains for milk yield. In many cases, further selection goals such as milk composition, udder morphology, somatic cell count, and scrapie resistance have been implemented. However, the possibility of including other traits of selective interest is limited by high recording costs. Also, the organizational effort needed to apply the traditional quantitative approach limits the diffusion of current selection programs outside the European Mediterranean area. 
In this context, the application of selection schemes assisted by molecular information, to improve either traditional dairy traits or traits costly to record, seems to be attractive in dairy sheep. At the moment, the most effective strategy seems to be the strengthening of research projects aimed at finding causal mutations along the genes affecting traits of economic importance. However, genome-wide selection seems to be unfeasible in most dairy sheep breeds.

  15. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
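
    One generation step of the elitism-based immigrants scheme can be sketched as follows; the population size, immigrant ratio, and mutation rate are illustrative choices, not the paper's experimental settings:

```python
import random

def elitism_immigrants_step(pop, fitness, ratio=0.2, pmut=0.05, rng=random):
    """Replace the worst `ratio` of the population with mutated copies of
    the current elite, injecting diversity biased toward the current optimum."""
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[0]
    n_imm = int(len(pop) * ratio)
    def mutate(ind):
        # bit-flip mutation of a binary individual
        return [1 - g if rng.random() < pmut else g for g in ind]
    return ranked[:len(pop) - n_imm] + [mutate(elite) for _ in range(n_imm)]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
new_pop = elitism_immigrants_step(pop, fitness=sum)
```

The memory-based variant works the same way but seeds the immigrants from the best individual stored in memory instead of the previous generation's elite.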

  16. Optimizing the design of small-sized nucleus breeding programs for dairy cattle with minimal performance recording.

    PubMed

    Kariuki, C M; Komen, H; Kahi, A K; van Arendonk, J A M

    2014-12-01

    Dairy cattle breeding programs in developing countries are constrained by minimal and erratic pedigree and performance recording on cows on commercial farms. Small-sized nucleus breeding programs offer a viable alternative. Deterministic simulations using selection index theory were performed to determine the optimum design for small-sized nucleus schemes for dairy cattle. The nucleus was made up of 197 bulls and 243 cows distributed in 8 non-overlapping age classes. Each year 10 sires and 100 dams were selected to produce the next generation of male and female selection candidates. Conception rates and sex ratio were fixed at 0.90 and 0.50, respectively, translating to 45 male and 45 female candidates joining the nucleus per year. Commercial recorded dams provided information for genetic evaluation of selection candidates (bulls) in the nucleus. Five strategies were defined: nucleus records only [within-nucleus dam performance (DP)], progeny records in addition to nucleus records [progeny testing (PT)], genomic information only [genomic selection (GS)], dam performance records in addition to genomic information (GS+DP), and progeny records in addition to genomic information (GS+PT). Alternative PT, GS, GS+DP, and GS+PT schemes differed in the number of progeny per sire and size of reference population. The maximum number of progeny records per sire was 30, and the maximum size of the reference population was 5,000. Results show that GS schemes had higher responses and lower accuracies compared with other strategies, with the higher response being due to shorter generation intervals. Compared with similar sized progeny-testing schemes, genomic-selection schemes would have lower accuracies but these are offset by higher responses per year, which might provide additional incentive for farmers to participate in recording. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
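
    The trade-off the authors describe, lower accuracy offset by shorter generation intervals, follows directly from the breeder's equation R = i r σ_a / L; the comparison below uses invented numbers purely for illustration:

```python
def annual_response(intensity, accuracy, sigma_a, gen_interval):
    """Annual genetic gain from the breeder's equation:
    R = i * r * sigma_a / L."""
    return intensity * accuracy * sigma_a / gen_interval

# A genomic scheme with lower accuracy but a much shorter generation
# interval can still out-gain a progeny-testing scheme (illustrative values).
r_gs = annual_response(intensity=2.0, accuracy=0.6, sigma_a=1.0, gen_interval=2.0)
r_pt = annual_response(intensity=2.0, accuracy=0.8, sigma_a=1.0, gen_interval=5.0)
```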

  17. Intelligent control of non-linear dynamical system based on the adaptive neurocontroller

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Kobezhicov, V.

    2015-10-01

    This paper presents an adaptive neuro-controller for intelligent control of a non-linear dynamical system. The adaptive neuro-controller, formed as a fuzzy selective neural net, generates an effective control signal from the system's state under random perturbations. The validity and advantages of the proposed adaptive neuro-controller are demonstrated by numerical simulations. The simulation results show that the proposed control scheme achieves real-time control speed and competitive performance compared with PID and fuzzy logic controllers.

  18. The effects of sample scheduling and sample numbers on estimates of the annual fluxes of suspended sediment in fluvial systems

    USGS Publications Warehouse

    Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique

    2015-01-01

    Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 year⁻¹) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest number of samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. 
The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data. 
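
    The flux computation behind such estimates can be sketched as a sum of concentration × discharge over sampling intervals; the key assumption, as in the study, is that each sample represents its whole interval:

```python
def annual_flux_tonnes(conc_mg_per_l, discharge_m3_per_s, interval_s):
    """Annual suspended-sediment flux from paired samples.
    Unit check: 1 mg/L x 1 m^3 = 1 g, so c*Q*dt accumulates grams."""
    grams = sum(c * q * dt for c, q, dt in
                zip(conc_mg_per_l, discharge_m3_per_s, interval_s))
    return grams / 1e6                     # grams -> tonnes

# one day at 100 mg/L and 10 m^3/s moves 86.4 tonnes
day_flux = annual_flux_tonnes([100.0], [10.0], [86400.0])
```

Hydrology-based scheduling effectively concentrates the sampled intervals on high-discharge periods, where most of the annual sum accrues.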

  19. Multispectral studies of selected crater- and basin-filling lunar Maria from Galileo Earth-Moon encounter 1

    NASA Technical Reports Server (NTRS)

    Williams, D. A.; Greeley, R.; Neukum, G.; Wagner, R.

    1993-01-01

    New visible and near-infrared multispectral data of the Moon were obtained by the Galileo spacecraft in December, 1990. These data were calibrated with Earth-based spectral observations of the nearside to compare compositional information to previously uncharacterized mare basalts filling craters and basins on the western near side and eastern far side. A Galileo-based spectral classification scheme, modified from the Earth-based scheme developed by Pieters, designates the different spectral classifications of mare basalt observed using the 0.41/0.56 micron reflectance ratio (titanium content), 0.56 micron reflectance values (albedo), and 0.76/0.99 micron reflectance ratio (absorption due to Fe(2+) in mafic minerals and glass). In addition, age determinations from crater counts and results of a linear spectral mixing model were used to assess the volcanic histories of specific regions of interest. These interpreted histories were related to models of mare basalt petrogenesis in an attempt to better understand the evolution of lunar volcanism.
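
    The three spectral indicators of the classification scheme can be expressed directly as reflectance ratios; the helper below is illustrative (the thresholding of these values into named basalt classes is omitted):

```python
def mare_spectral_indicators(r410, r560, r760, r990):
    """Indicators used in the Galileo-based mare basalt classification:
    0.41/0.56-um ratio (titanium proxy), 0.56-um reflectance (albedo),
    and 0.76/0.99-um ratio (Fe2+ 1-um absorption proxy)."""
    return {
        "uv_vis": r410 / r560,
        "albedo": r560,
        "mafic_ratio": r760 / r990,
    }

ind = mare_spectral_indicators(r410=0.04, r560=0.08, r760=0.10, r990=0.09)
```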

  20. An ensemble of dynamic neural network identifiers for fault detection and isolation of gas turbine engines.

    PubMed

    Amozegar, M; Khorasani, K

    2016-04-01

    In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
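
    Residual generation from an ensemble of identifiers reduces to comparing the measurement with the ensemble-averaged prediction; the toy callables below stand in for the trained MLP/RBF/SVM identifiers, and the averaging rule is one simple choice of combiner:

```python
import numpy as np

def ensemble_residual(models, u, y_measured):
    """Residual = measurement minus the mean prediction of the ensemble.
    `models` are callables mapping input u to a predicted output."""
    y_hat = np.mean([m(u) for m in models], axis=0)
    return y_measured - y_hat

models = [lambda u: 0.9 * u, lambda u: 1.1 * u]    # two toy identifiers
u = np.array([1.0, 2.0])
r = ensemble_residual(models, u, y_measured=u)     # healthy case: y == u
```

For FDI, the residual would then be thresholded: a near-zero residual indicates healthy operation, while a persistent offset flags a fault signature.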

  1. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove non-informative areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of the ROIs was computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine classifier was used to optimally combine the selected image features into a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach can improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
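
    The frequency-feature step can be sketched as radial band energies of a 2-D spectrum. The study used the discrete cosine transform, so the FFT magnitude spectrum below is only a stand-in, and `n_bands` is an invented parameter:

```python
import numpy as np

def band_energy_features(roi, n_bands=4):
    """Sum spectral magnitude in concentric radial bands around the spectrum
    centre, giving one low-to-high frequency energy feature per band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(roi)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    yy, xx = np.indices(spec.shape)
    radius = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bands + 1)
    return [float(spec[(radius >= lo) & (radius < hi)].sum())
            for lo, hi in zip(edges, edges[1:])]

# a constant ROI has all of its energy at DC, i.e. in the innermost band
feats = band_energy_features(np.ones((8, 8)))
```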

  2. Multi-instance learning based on instance consistency for image retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

    Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches, however, often fail to select positive instances correctly from positive bags, which can result in low accuracy. In this paper, we propose a new image retrieval approach called multiple-instance learning based on instance consistency (MILIC) to mitigate this issue. First, we select potential positive instances in each positive bag by ranking the instance-consistency (IC) values of its instances. Then, we design a feature representation scheme based on the potential positive instances, which represents the relationship among bags and instances, to convert each bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.
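
    A loose sketch of instance-consistency ranking: score each instance in a positive bag by its closeness to the bulk of instances across bags and keep the top-ranked ones. This only illustrates the idea and is not the paper's exact IC definition:

```python
import numpy as np

def select_consistent_instances(bags, k=1):
    """For each bag, keep the k instances with the highest 'consistency'
    score, here taken as negative mean distance to all instances."""
    all_inst = np.vstack(bags)
    selected = []
    for bag in bags:
        scores = np.array([-np.linalg.norm(all_inst - x, axis=1).mean()
                           for x in bag])
        selected.append(bag[np.argsort(scores)[::-1][:k]])
    return selected

bags = [np.array([[0.0, 0.0], [10.0, 10.0]]),   # second instance is an outlier
        np.array([[0.1, 0.0], [0.0, 0.1]])]
picked = select_consistent_instances(bags)       # outlier is filtered out
```

In MILIC-like pipelines, each bag would then be represented by (features of) its selected instances so that a single-instance learner such as an SVM can be applied.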

  3. DISSECT: a new mnemonic-based approach to the categorization of aortic dissection.

    PubMed

    Dake, M D; Thompson, M; van Sambeek, M; Vermassen, F; Morales, J P

    2013-08-01

    Classification systems for aortic dissection provide important guides to clinical decision-making, but the relevance of traditional categorization schemes is being questioned in an era when endovascular techniques are assuming a growing role in the management of this frequently complex and catastrophic entity. In recognition of the expanding range of interventional therapies now used as alternatives to conventional treatment approaches, the Working Group on Aortic Diseases of the DEFINE Project developed a categorization system that features the specific anatomic and clinical manifestations of the disease process that are most relevant to contemporary decision-making. The DISSECT classification system is a mnemonic-based approach to the evaluation of aortic dissection. It guides clinicians through an assessment of six critical characteristics that facilitate optimal communication of the most salient details that currently influence the selection of a therapeutic option, including those findings that are key when considering an endovascular procedure, but are not taken into account by the DeBakey or Stanford categorization schemes. The six features of aortic dissection include: duration of disease; intimal tear location; size of the dissected aorta; segmental extent of aortic involvement; clinical complications of the dissection, and thrombus within the aortic false lumen. In current clinical practice, endovascular therapy is increasingly considered as an alternative to medical management or open surgical repair in select cases of type B aortic dissection. Currently, endovascular aortic repair is not used for patients with type A aortic dissection, but catheter-based techniques directed at peripheral branch vessel ischemia that may complicate type A dissection are considered valuable adjunctive interventions, when indicated. 
The use of a new system for categorization of aortic dissection, DISSECT, addresses the shortcomings of well-known established schemes devised more than 40 years ago, before the introduction of endovascular techniques. It will serve as a guide to support a critical analysis of contemporary therapeutic options and inform management decisions based on specific features of the disease process. Copyright © 2013 European Society for Vascular Surgery. All rights reserved.

  4. Follicle Detection on the USG Images to Support Determination of Polycystic Ovary Syndrome

    NASA Astrophysics Data System (ADS)

    Adiwijaya; Purnama, B.; Hasyim, A.; Septiani, M. D.; Wisesty, U. N.; Astuti, W.

    2015-06-01

    Polycystic Ovary Syndrome (PCOS) is the most common endocrine disorder affecting women in their reproductive years, and it has gained attention from married couples affected by infertility. One of the diagnostic criteria considered by doctors is the manual analysis of ovarian USG images to detect the number and size of the ovary's follicles. Such manual analysis can suffer from low reproducibility and efficiency and from inter-observer variability. To overcome these problems, an automatic scheme is proposed to detect follicles in USG images in support of PCOS diagnosis. The first stage determines initial homogeneous regions, which are then segmented into actual follicle shapes. The next stage selects the regions that satisfy follicle criteria and measures the attributes of each segmented region. These measurements yield the number and size of the follicles, which are used to categorize the image as PCOS or non-PCOS. The segmentation method used is region growing, in both region-based and seed-based variants. For measuring follicle diameter, two methods are compared: stereology and Euclidean distance. The most effective system configuration for detecting PCOS uses region growing for segmentation and Euclidean distance for follicle quantification.
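
    The Euclidean-distance diameter measurement can be sketched as the largest pairwise distance within a segmented region; the pixel-to-millimetre calibration is an assumed value, not one from the paper:

```python
import math

def follicle_diameter_mm(region_pixels, mm_per_pixel=0.1):
    """Diameter estimate: the largest Euclidean distance between any two
    pixels of a segmented region, scaled to millimetres (O(n^2) brute force)."""
    best = 0.0
    for y1, x1 in region_pixels:
        for y2, x2 in region_pixels:
            best = max(best, math.hypot(y2 - y1, x2 - x1))
    return best * mm_per_pixel

region = [(0, x) for x in range(11)]     # a thin region 10 pixels long
d = follicle_diameter_mm(region)         # 10 px * 0.1 mm/px = 1.0 mm
```

A diagnosis-support rule would then count regions whose diameter falls in the follicle size range and compare the count against the PCOS criterion.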

  5. Elderly demand for family-based care and support: evidence from a social intervention strategy.

    PubMed

    Aboagye, Emmanuel; Agyemang, Otuo Serebour; Tjerbo, Trond

    2013-12-06

    This paper examines the influence of the national health insurance scheme on elderly demand for family-based care and support. It contributes to the growing concern on the rapid increase in the elderly population globally using micro-level social theory to examine the influence the health insurance has on elderly demand for family support. A qualitative case study approach is applied to construct a comprehensive and thick description of how the national health insurance scheme influences the elderly in their demand for family support. Through focused interviews and direct observation of six selected cases, in-depth information on primary carers, living arrangement and the interaction between the health insurance as structure and elders as agents are analyzed. The study highlights that the interaction between the elderly and the national health insurance scheme has produced a new stratum of relationship between the elderly and their primary carers. Consequently, this has created equilibrium between the elderly demand for support and support made available by their primary carers. As the demand of the elderly for support is declining, supply of support by family members for the elderly is also on the decline.

  6. Elderly Demand for Family-based Care and Support: Evidence from a Social Intervention Strategy

    PubMed Central

    Aboagye, Emmanuel; Agyemang, Otuo Serebour; Tjerbo, Trond

    2014-01-01

    This paper examines the influence of the national health insurance scheme on elderly demand for family-based care and support. It contributes to the growing concern on the rapid increase in the elderly population globally using micro-level social theory to examine the influence the health insurance has on elderly demand for family support. A qualitative case study approach is applied to construct a comprehensive and thick description of how the national health insurance scheme influences the elderly in their demand for family support. Through focused interviews and direct observation of six selected cases, in-depth information on primary carers, living arrangement and the interaction between the health insurance as structure and elders as agents are analyzed. The study highlights that the interaction between the elderly and the national health insurance scheme has produced a new stratum of relationship between the elderly and their primary carers. Consequently, this has created equilibrium between the elderly demand for support and support made available by their primary carers. As the demand of the elderly for support is declining, supply of support by family members for the elderly is also on the decline. PMID:24576369

  7. Attentional Selection Can Be Predicted by Reinforcement Learning of Task-relevant Stimulus Features Weighted by Value-independent Stickiness.

    PubMed

    Balcarras, Matthew; Ardid, Salva; Kaping, Daniel; Everling, Stefan; Womelsdorf, Thilo

    2016-02-01

    Attention includes processes that evaluate the relevance of stimuli, select the most relevant stimulus against less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring nonrelevant sensory features, locations, and action plans. We found that value-based reinforcement learning mechanisms could account for feature-based attentional selection and choice behavior but required a value-independent stickiness selection process to explain selection errors once behavior was asymptotic. By comparing different reinforcement learning schemes, we found that trial-by-trial selections were best predicted by a model that only represents expected values for the task-relevant feature dimension, with nonrelevant stimulus features and action plans having only a marginal influence on covert selections. These findings show that attentional control subprocesses can be described by (1) the reinforcement learning of feature values within a restricted feature space that excludes irrelevant feature dimensions, (2) a stochastic selection process on feature-specific value representations, and (3) value-independent stickiness toward previous feature selections akin to perseveration in the motor domain. We speculate that these three mechanisms are implemented by distinct but interacting brain circuits and that the proposed formal account of feature-based stimulus selection will be important to understand how attentional subprocesses are implemented in primate brain networks.
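    The value-update and stochastic-selection-with-stickiness components described above can be sketched as follows; the learning rate, inverse temperature, and stickiness bonus here are illustrative placeholders, not the paper's fitted parameters:

```python
import math
import random

def softmax_choice(values, prev_choice, beta=3.0, stickiness=0.5, rng=random):
    """Softmax over per-feature values, plus a value-independent
    stickiness bonus for the previously selected feature.
    beta and stickiness are illustrative, not fitted values."""
    logits = [beta * v + (stickiness if i == prev_choice else 0.0)
              for i, v in enumerate(values)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs

def update_value(values, chosen, reward, alpha=0.2):
    """Rescorla-Wagner style update of the chosen feature's value only."""
    values[chosen] += alpha * (reward - values[chosen])
```

    With equal values, the stickiness term alone biases the choice toward the previously selected feature, which is the value-independent perseveration effect the study reports.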

  8. Secure Multiuser Communications in Wireless Sensor Networks with TAS and Cooperative Jamming

    PubMed Central

    Yang, Maoqiang; Zhang, Bangning; Huang, Yuzhen; Yang, Nan; Guo, Daoxing; Gao, Bin

    2016-01-01

    In this paper, we investigate the secure transmission in wireless sensor networks (WSNs) consisting of one multiple-antenna base station (BS), multiple single-antenna legitimate users, one single-antenna eavesdropper and one multiple-antenna cooperative jammer. In an effort to reduce the scheduling complexity and extend the battery lifetime of the sensor nodes, the switch-and-stay combining (SSC) scheduling scheme is exploited over the sensor nodes. Meanwhile, transmit antenna selection (TAS) is employed at the BS and cooperative jamming (CJ) is adopted at the jammer node, aiming at achieving a satisfactory secrecy performance. Moreover, depending on whether the jammer node has the global channel state information (CSI) of both the legitimate channel and the eavesdropper’s channel, it explores a zero-forcing beamforming (ZFB) scheme or a null-space artificial noise (NAN) scheme to confound the eavesdropper while avoiding the interference to the legitimate user. Building on this, we propose two novel hybrid secure transmission schemes, termed TAS-SSC-ZFB and TAS-SSC-NAN, for WSNs. We then derive the exact closed-form expressions for the secrecy outage probability and the effective secrecy throughput of both schemes to characterize the secrecy performance. Using these closed-form expressions, we further determine the optimal switching threshold and obtain the optimal power allocation factor between the BS and jammer node for both schemes to minimize the secrecy outage probability, while the optimal secrecy rate is decided to maximize the effective secrecy throughput for both schemes. Numerical results are provided to verify the theoretical analysis and illustrate the impact of key system parameters on the secrecy performance. PMID:27845753
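    The secrecy outage probability that the closed-form expressions characterize can be illustrated with a brute-force Monte Carlo estimate; the Rayleigh-fading model and average SNRs below are simplifying assumptions, not the paper's TAS-SSC system model:

```python
import math
import random

def secrecy_outage_prob(rate_s, avg_snr_b, avg_snr_e, n=200_000, seed=1):
    """Monte Carlo estimate of the secrecy outage probability under
    Rayleigh fading: an outage occurs when the instantaneous secrecy
    capacity log2(1+g_B) - log2(1+g_E) falls below the secrecy rate.
    The average SNRs and target rate are illustrative inputs."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(n):
        g_b = rng.expovariate(1.0 / avg_snr_b)  # legitimate-channel SNR
        g_e = rng.expovariate(1.0 / avg_snr_e)  # eavesdropper-channel SNR
        c_s = max(0.0, math.log2(1.0 + g_b) - math.log2(1.0 + g_e))
        outages += c_s < rate_s
    return outages / n
```

    As expected, strengthening the eavesdropper's average SNR raises the outage probability, which is the trade-off the optimal power allocation between the BS and the jammer is designed to manage.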

  9. Computational design of the basic dynamical processes of the UCLA general circulation model

    NASA Technical Reports Server (NTRS)

    Arakawa, A.; Lamb, V. R.

    1977-01-01

    The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. Selection of space finite-difference schemes for homogeneous incompressible flow, with/without a free surface, nonlinear two-dimensional nondivergent flow, enstrophy conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes are discussed.

  10. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils) and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high resolution numerics used allows for the use of relatively sparse grids. Three levels of grid resolution are examined, 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
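    The standard central differencing schemes mentioned above follow a common stencil pattern; a minimal sketch of the second- and fourth-order first-derivative stencils on a periodic grid (not the DRP coefficients of Bogey and Bailly):

```python
import math

def ddx_central(f, i, dx, order=4):
    """First-derivative central differences on a periodic 1-D grid:
    the standard 3-point (second-order) and 5-point (fourth-order)
    stencils from the family of schemes the code selects among."""
    n = len(f)
    if order == 2:
        return (f[(i + 1) % n] - f[(i - 1) % n]) / (2.0 * dx)
    if order == 4:
        return (-f[(i + 2) % n] + 8.0 * f[(i + 1) % n]
                - 8.0 * f[(i - 1) % n] + f[(i - 2) % n]) / (12.0 * dx)
    raise ValueError("only orders 2 and 4 in this sketch")
```

    On a smooth field the 5-point stencil's error shrinks as dx**4 rather than dx**2, which is what lets higher-resolution schemes get away with the relatively sparse grids noted in the abstract.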

  11. MRI-based treatment planning with pseudo CT generated through atlas registration.

    PubMed

    Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-05-01

    To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
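    The arithmetic-mean synthesis scheme and the bulk water-HU baseline compared above can be sketched in a few lines; the flat HU lists stand in for registered, deformed image volumes:

```python
def fuse_pseudo_ct(deformed_atlas_cts):
    """Arithmetic-mean synthesis: voxelwise mean Hounsfield unit over
    the deformed atlas CTs (given here as equal-length flat HU lists)."""
    n = len(deformed_atlas_cts)
    if n == 0:
        raise ValueError("need at least one deformed atlas CT")
    size = len(deformed_atlas_cts[0])
    return [sum(ct[v] for ct in deformed_atlas_cts) / n for v in range(size)]

def bulk_water_ct(n_voxels):
    """Comparison scheme: assign the HU of water (0) to every voxel."""
    return [0.0] * n_voxels
```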

  12. MRI-based treatment planning with pseudo CT generated through atlas registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uh, Jinsoo, E-mail: jinsoo.uh@stjude.org; Merchant, Thomas E.; Hua, Chiaho

    2014-05-15

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range).
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  13. MRI-based treatment planning with pseudo CT generated through atlas registration

    PubMed Central

    Uh, Jinsoo; Merchant, Thomas E.; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-01-01

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs. PMID:24784377

  14. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline

    PubMed Central

    Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin

    2014-01-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures.
PMID:24567717
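    The weighted majority voting step of the scheme can be sketched as follows; the labels and similarity weights are illustrative stand-ins for AutoSeg's registered label maps and graph-derived weights:

```python
from collections import defaultdict

def weighted_majority_vote(atlas_labels, atlas_weights):
    """Voxelwise weighted majority voting over the selected atlases.
    atlas_labels: one flat label list per atlas; atlas_weights: one
    similarity weight per atlas (illustrative values, e.g. from the
    graph-based atlas selection)."""
    n_vox = len(atlas_labels[0])
    fused = []
    for v in range(n_vox):
        votes = defaultdict(float)
        for labels, w in zip(atlas_labels, atlas_weights):
            votes[labels[v]] += w   # each atlas votes with its weight
        fused.append(max(votes, key=votes.get))
    return fused
```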

  15. Implementing forward recovery using checkpointing in distributed systems

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1991-01-01

    The paper describes the implementation of a forward recovery scheme using checkpoints and replicated tasks. The implementation is based on the concept of lookahead execution and rollback validation. In the experiment, two tasks are selected for the normal execution and one for rollback validation. It is shown that the recovery strategy has nearly error-free execution time and an average redundancy lower than TMR.

  16. Direct Job Creation in the Public Sector. Evaluation of National Experience in Canada, Denmark, Norway, United Kingdom, United States.

    ERIC Educational Resources Information Center

    Organisation for Economic Cooperation and Development, Paris (France).

    This report examines selected public sector direct job creation schemes that were in operation in 1977-1978 in Canada, Denmark, Norway, the United Kingdom, and the United States. Based on responses to a questionnaire and discussions with officials in the five countries, the information presented in the report is not intended to evaluate any one…

  17. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
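    A Monte Carlo cost-benefit comparison of this kind can be sketched as an NPV estimate; all cash-flow figures, the discount rate, and the uniform perturbation below are illustrative assumptions, not the study's data:

```python
import random

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a fixed discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def monte_carlo_npv(capex, annual_benefit, annual_cost, years=15,
                    rate=0.05, spread=0.2, n=20_000, seed=7):
    """Median NPV over n scenarios in which the yearly benefits and
    costs are perturbed uniformly by +/-spread (all figures
    illustrative, not the Singapore case-study inputs)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        b = annual_benefit * rng.uniform(1.0 - spread, 1.0 + spread)
        c = annual_cost * rng.uniform(1.0 - spread, 1.0 + spread)
        samples.append(npv([-capex] + [b - c] * years, rate))
    samples.sort()
    return samples[len(samples) // 2]
```

    Comparing the median (or full distribution of) NPVs of two schemes under the same scenario draws is the basic mechanism behind a simulation-based cost-benefit comparison.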

  18. Research on filter’s parameter selection based on PROMETHEE method

    NASA Astrophysics Data System (ADS)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the decision problem of optimizing Gabor filter parameters, and a model mapping the elements of the decision problem onto the method was established. Taking the identification of a military target as an example, the filter-parameter decision problem was simulated and solved with PROMETHEE. The results showed that using the PROMETHEE method to select filter parameters is more systematic, avoiding the human bias introduced by expert judgment and empirical tuning. The method can serve as a reference for deciding the parameter-configuration scheme of the filter.
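    The core of PROMETHEE II, aggregating pairwise preferences into net outranking flows, can be sketched as follows; the "usual" (0/1) preference function used here is just one simple choice among the method's generalized criteria:

```python
def promethee_net_flows(alternatives, weights, prefer=None):
    """PROMETHEE II net outranking flows. alternatives: rows of
    criterion scores (higher = better); weights sum to 1. The default
    preference function is the usual (0/1) criterion."""
    if prefer is None:
        prefer = lambda d: 1.0 if d > 0 else 0.0
    n = len(alternatives)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # weighted preference of alternative a over alternative b
            pi_ab = sum(w * prefer(alternatives[a][k] - alternatives[b][k])
                        for k, w in enumerate(weights))
            flows[a] += pi_ab / (n - 1)   # contributes to a's positive flow
            flows[b] -= pi_ab / (n - 1)   # contributes to b's negative flow
    return flows
```

    Ranking the alternatives (here, candidate parameter configurations) by descending net flow yields the PROMETHEE II complete order.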

  19. Optical stealth transmission based on super-continuum generation in highly nonlinear fiber over WDM network.

    PubMed

    Zhu, Huatao; Wang, Rong; Pu, Tao; Fang, Tao; Xiang, Peng; Zheng, Jilin; Chen, Dalei

    2015-06-01

    In this Letter, optical stealth transmission carried by super-continuum spectrum optical pulses generated in highly nonlinear fiber is proposed and experimentally demonstrated. In the proposed transmission scheme, super-continuum signals are reshaped in the spectral domain through a wavelength-selective switch and are temporally spread by a chromatic dispersion device to achieve the same noise-like characteristic as the noise in optical networks, so that in both the time domain and the spectral domain the stealth signals are hidden in the public channel. Our experimental results show that, compared with existing schemes where stealth channels are carried by amplified spontaneous emission noise, the super-continuum signal improves transmission performance and robustness.

  20. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  1. Can health insurance improve access to quality care for the Indian poor?

    PubMed

    Michielsen, Joris; Criel, Bart; Devadasan, Narayanan; Soors, Werner; Wouters, Edwin; Meulemans, Herman

    2011-08-01

    Recently, the Indian government launched health insurance schemes for the poor both to protect them from high health spending and to improve access to high-quality health services. This article aims to review the potential of health insurance interventions to improve access to quality care in India, based on experiences of community health insurance schemes. PubMed, Ovid MEDLINE (R), All EBM Reviews, CSA Sociological Abstracts, CSA Social Service Abstracts, EconLit, Science Direct, the ISI Web of Knowledge, Social Science Research Network and databases of research centers were searched up to September 2010. An Internet search was executed. One thousand one hundred and thirty-three papers were assessed for inclusion and exclusion criteria. Twenty-five papers were selected, providing information on eight schemes. A realist review was performed using Hirschman's exit-voice theory: mechanisms to improve exit strategies (financial assets and infrastructure) and strengthen patients' long voice route (quality management) and short voice route (patient pressure). All schemes use a mix of measures to improve exit strategies and the long voice route. Most mechanisms are not effective in reality. Schemes that focus on the patients' bargaining position at the patient-provider interface seem to improve access to quality care. Top-down health insurance interventions with a focus on exit strategies will not work fully in the Indian context. The government must actively facilitate the potential of CHI schemes to emancipate the target group so that they may transform from mere passive beneficiaries into active participants in their health.

  2. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
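    The flux-corrected transport idea, a monotone low-order flux plus a limited antidiffusive correction toward a high-order flux, can be sketched in 1-D. This minimal Zalesak-style limiter with donor-cell and centered fluxes is only illustrative of the classical FCT algorithm, not the paper's single-stage method-of-lines limiter:

```python
def fct_advect_step(u, c):
    """One periodic 1-D FCT step for du/dt + v*du/dx = 0 with Courant
    number c in (0, 1]: donor-cell (monotone) low-order fluxes plus a
    Zalesak-limited antidiffusive correction toward centered fluxes."""
    n = len(u)
    f_lo = [c * u[i] for i in range(n)]                         # face i+1/2
    f_hi = [0.5 * c * (u[i] + u[(i + 1) % n]) for i in range(n)]
    u_td = [u[i] - (f_lo[i] - f_lo[i - 1]) for i in range(n)]   # low-order
    a = [f_hi[i] - f_lo[i] for i in range(n)]                   # antidiffusive
    rp, rm = [0.0] * n, [0.0] * n
    for i in range(n):
        umax = max(u_td[i - 1], u_td[i], u_td[(i + 1) % n])
        umin = min(u_td[i - 1], u_td[i], u_td[(i + 1) % n])
        p_in = max(0.0, a[i - 1]) - min(0.0, a[i])    # antidiffusion into cell i
        p_out = max(0.0, a[i]) - min(0.0, a[i - 1])   # antidiffusion out of cell i
        rp[i] = min(1.0, (umax - u_td[i]) / p_in) if p_in > 0.0 else 0.0
        rm[i] = min(1.0, (u_td[i] - umin) / p_out) if p_out > 0.0 else 0.0
    u_new = []
    for i in range(n):
        # limit each face flux by its receiving and donating cells
        cr = min(rp[(i + 1) % n], rm[i]) if a[i] >= 0.0 else min(rp[i], rm[(i + 1) % n])
        cl = min(rp[i], rm[i - 1]) if a[i - 1] >= 0.0 else min(rp[i - 1], rm[i])
        u_new.append(u_td[i] - (cr * a[i] - cl * a[i - 1]))
    return u_new
```

    The limited fluxes telescope, so the update is conservative, and the per-cell bounds keep the corrected solution within the local extrema of the monotone low-order solution.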

  3. Setting up the criteria and credit-awarding scheme for building interior material selection to achieve better indoor air quality.

    PubMed

    Niu, J L; Burnett, J

    2001-06-01

    Methods, standards, and regulations that aim to reduce indoor air pollution from building materials are critically reviewed. These are classified as content control and emission control. Methods and standards can be found in both classes. In the regulation domain, only content control is enforced in some countries and regions, and asbestos is the only building material that is banned from building use. The controlled pollutants include heavy metals, radon, formaldehyde, and volatile organic compounds (VOCs). Emission rate control based upon environmental chamber testing is very much in the nature of voluntary product labeling and ranking, and mainly targets formaldehyde and VOC emissions. It is suggested that radon emission from building materials should be subject to similar emission rate control. A comprehensive set of criteria and a credit-awarding scheme that encourage the use of low-emission building materials are synthesized, and how this scheme can be applied in building design is proposed and discussed.

  4. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming increasingly popular, but traditional public-key encryption algorithms place high demands on hardware and are therefore poorly suited to mobile terminals with limited computing resources. In addition, these algorithms are not resistant to quantum computing. This paper studies the post-quantum public-key algorithm NTRU by analyzing how the parameters q and k affect the probability of generating a reasonable signature value. Two methods are proposed to increase this probability. First, increase the value of the parameter q. Second, add an authentication condition during the signing phase that checks whether the candidate meets the requirements for a reasonable signature. Experimental results show that the proposed signature scheme leaks no private-key information through the signature value, increases the probability of generating a reasonable signature value, improves the signing rate, and avoids propagating invalid signatures through the network, although the scheme places certain restrictions on parameter selection.

  5. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.

  6. A new processing scheme for ultra-high resolution direct infusion mass spectrometry data

    NASA Astrophysics Data System (ADS)

    Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus

    2018-04-01

    High resolution, high accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition, enabling the identification of the chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time-consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common ion selection across replicate measurements and shoulder ion removal. The scheme allows both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques, resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
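    Two of the filtering steps, the instrumental-error (mass accuracy) filter and blank subtraction, can be sketched as follows; the thresholds and the assignment mapping are illustrative, not the scheme's actual parameters:

```python
def ppm_error(measured_mz, exact_mz):
    """Mass accuracy of a peak in parts per million."""
    return abs(measured_mz - exact_mz) / exact_mz * 1e6

def filter_peaks(peaks, assignments, blank_mzs, ppm_tol=3.0, blank_tol=0.001):
    """Keep a (m/z, intensity) peak only if it has an elemental
    assignment within ppm_tol (instrumental-error filter) and is absent
    from the blank run within blank_tol (blank subtraction). The
    tolerances are illustrative placeholders."""
    kept = []
    for mz, intensity in peaks:
        exact = assignments.get(mz)
        if exact is None or ppm_error(mz, exact) > ppm_tol:
            continue  # unassigned, or outside the mass-accuracy window
        if any(abs(mz - b) <= blank_tol for b in blank_mzs):
            continue  # also present in the blank: subtract
        kept.append((mz, intensity))
    return kept
```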

  7. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. By global methods we mean those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using an unsupervised Vector Quantization algorithm, LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
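    A toy sketch of the two-stage idea follows, assuming a k-means-style stand-in for the LBG quantizer and per-cell class frequencies for the local Bayes-rule step; function names and defaults are ours, not the paper's.

    ```python
    import numpy as np

    def assign_cells(X, centroids):
        """Nearest-centroid cell index for each sample."""
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        return np.argmin(d2, axis=1)

    def lbg_codebook(X, n_cells, n_iter=20, seed=0):
        """Crude k-means-style stand-in for the LBG vector quantizer:
        the unsupervised first stage that splits the sample into cells."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), n_cells, replace=False)].astype(float)
        for _ in range(n_iter):
            cells = assign_cells(X, centroids)
            for c in range(n_cells):
                if np.any(cells == c):
                    centroids[c] = X[cells == c].mean(axis=0)
        return centroids

    def fit_local_priors(X, y, centroids):
        """Second stage: per-cell class frequencies, a Bayes-rule-style
        estimate on each (much easier) local problem."""
        cells = assign_cells(X, centroids)
        classes = np.unique(y)
        priors = np.full((len(centroids), len(classes)), 1.0 / len(classes))
        for c in range(len(centroids)):
            if np.any(cells == c):
                priors[c] = [np.mean(y[cells == c] == cls) for cls in classes]
        return priors, classes

    def predict(X_new, centroids, priors, classes):
        """Classify by routing each point to its cell's local estimate."""
        cells = assign_cells(X_new, centroids)
        return classes[priors[cells].argmax(axis=1)]
    ```

    On two well-separated clusters this reduces to nearest-cluster classification; the interest of the scheme lies in problems where the global decision boundary is complex but each cell's local problem is simple.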

  8. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach.

    PubMed

    Julie, E Golden; Selvi, S Tamil

    2016-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in WSNs is a significant issue, so it is essential to develop an energy-aware clustering protocol that reduces energy consumption and thereby increases network lifetime. In this paper, a neuro-fuzzy energy-aware clustering scheme (NFEACS) is proposed to form optimal, energy-aware clusters. NFEACS consists of two parts, a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and selecting cluster heads in WSNs. NFEACS uses a neural network that provides an effective training set, built from the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy-aware cluster heads. Fuzzy rules are used in the fuzzy logic part, which takes these inputs to form clusters. NFEACS is designed for WSNs and handles node mobility. The proposed scheme is compared with related clustering schemes: the cluster-head election mechanism using fuzzy logic and energy-aware fuzzy unequal clustering. The experimental results show that NFEACS performs better than the other related schemes.

  9. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach

    PubMed Central

    Julie, E. Golden; Selvi, S. Tamil

    2016-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in WSNs is a significant issue, so it is essential to develop an energy-aware clustering protocol that reduces energy consumption and thereby increases network lifetime. In this paper, a neuro-fuzzy energy-aware clustering scheme (NFEACS) is proposed to form optimal, energy-aware clusters. NFEACS consists of two parts, a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and selecting cluster heads in WSNs. NFEACS uses a neural network that provides an effective training set, built from the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy-aware cluster heads. Fuzzy rules are used in the fuzzy logic part, which takes these inputs to form clusters. NFEACS is designed for WSNs and handles node mobility. The proposed scheme is compared with related clustering schemes: the cluster-head election mechanism using fuzzy logic and energy-aware fuzzy unequal clustering. The experimental results show that NFEACS performs better than the other related schemes. PMID:26881269

  10. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence, combustion, and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinement and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme uses sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit that amount and to aid the selection and/or blending of the appropriate types of numerical dissipation. Magnetohydrodynamic (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in understanding the dynamics of the evolution of our solar system and the main sequence stars. Although a few well-studied second- and third-order high-resolution shock-capturing schemes for MHD exist in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extending higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types differ from their hydrodynamic counterparts, and many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence-free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  11. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9 and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  12. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. In this process, an image is initially divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block, but the blocks are highly correlated with one another. Hence a singular value decomposition (SVD) technique is adopted to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and some seed points are selected for segmentation. For each seed point we perform a binary segmentation of the complete MRI, so that with all seed points we obtain an equal number of binary images. A parcel based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm (brain tumor) detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
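    The block-wise SVD-on-phase step can be sketched as below. The block size and the median-split threshold are our assumptions for illustration; the paper's exact thresholding rule is not reproduced.

    ```python
    import numpy as np

    def block_singular_values(img, block=8):
        """Largest singular value of the Fourier-phase matrix of each
        non-overlapping block (the SVD step described above).
        Coefficients with negligible magnitude are given zero phase to
        suppress numerical noise."""
        svals = []
        for i in range(0, img.shape[0] - block + 1, block):
            for j in range(0, img.shape[1] - block + 1, block):
                F = np.fft.fft2(img[i:i + block, j:j + block])
                phase = np.where(np.abs(F) > 1e-9, np.angle(F), 0.0)
                svals.append(np.linalg.svd(phase, compute_uv=False)[0])
        return np.array(svals)

    def edgy_blocks(svals, thresh=None):
        """Threshold the per-block singular values into edgy (seed
        candidates) vs. smooth blocks; the median split is our
        assumption, not the paper's exact rule."""
        if thresh is None:
            thresh = np.median(svals)
        return svals > thresh
    ```

    Flat blocks concentrate their Fourier energy at DC (zero phase everywhere, so the largest singular value is near zero), while blocks with edges spread energy and phase structure across coefficients, yielding larger singular values.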

  13. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    NASA Astrophysics Data System (ADS)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, characterized by low cost, high timeliness, and moderate-to-low spatial resolution, were used for the North China Plain (NCP) as a study region. Mixed-pixel spectral decomposition was first carried out to extract a useful regionalized indicator parameter from the initially selected indicators, namely the fraction (percentage) of winter wheat planting area in each pixel, treated as a regionalized indicator variable (RIV) for spatial sampling. The RIV values were then spatially analyzed to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, optimal local spatial prediction and a gridded system of extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  14. Data acquisition and path selection decision making for an autonomous roving vehicle

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.

    1976-01-01

    Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation, on an interactive graphics computer system, of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to data provided by the Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.

  15. Selective laser ionisation of radionuclide 63Ni

    NASA Astrophysics Data System (ADS)

    Tsvetkov, G. O.; D'yachkov, A. B.; Gorkunov, A. A.; Labozin, A. V.; Mironov, S. M.; Firsov, V. A.; Panchenko, V. Ya.

    2017-02-01

    We report a search for a scheme of selective laser stepwise ionisation of the radionuclide 63Ni by radiation of a dye laser pumped by a copper vapour laser. A three-stage scheme is found with ionisation through an autoionising state (AIS): 3d 84s2 3F4(E = 0) → 3d 94p 1Fo3(31030.99 cm-1) → 3d 94d 2[7/2]4(49322.56 cm-1) → AIS(67707.61 cm-1), which, when saturating radiation intensities are employed, provides an ionisation selectivity above 1200 for 63Ni.

  16. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  17. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  18. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is shuffled and then divided into blocks of the same size, and the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
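    The comparison rule ("an element of the first column of U against the corresponding element of V") and the share/reveal round trip can be sketched as below. The row index, the use of XOR in place of the paper's 2×2 visual-cryptography patterns, and the function names are our assumptions.

    ```python
    import numpy as np

    def feature_bits(blocks, row=1):
        """Binary feature per block: compare an element of the first
        column of U with the matching element of V (rule paraphrased
        from the abstract; the particular row index is an assumption)."""
        bits = []
        for b in blocks:
            U, _, Vt = np.linalg.svd(b)
            bits.append(int(U[row, 0] >= Vt[0, row]))  # Vt[0, row] == V[row, 0]
        return bits

    def make_ownership_share(feature, watermark):
        # XOR stands in here for the paper's 2x2 visual-cryptography
        # patterns: the ownership share binds the watermark to the
        # image's feature bits.
        return [f ^ w for f, w in zip(feature, watermark)]

    def reveal_watermark(feature, ownership):
        # Stacking (combining) the identification and ownership shares
        # recovers the watermark.
        return [f ^ o for f, o in zip(feature, ownership)]
    ```

    The point of such feature-comparison schemes is that the bits are recomputable from the (possibly attacked) image alone, so the watermark is revealed without ever embedding anything into the image.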

  19. Improving the performance of minimizers and winnowing schemes

    PubMed Central

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-01-01

    Abstract Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if that is not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles, in the negative, a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
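    The basic (w,k)-minimizers selection with a pluggable k-mer ordering can be sketched as below; the hash-based ordering illustrates the randomized-order recommendation from the abstract (function names are ours, not from the paper's software).

    ```python
    import hashlib

    def minimizer_positions(seq, k, w, order=None):
        """Positions of all (w,k)-minimizers of seq: in each window of w
        consecutive k-mers, keep the k-mer that is smallest under `order`
        (ties broken by leftmost position)."""
        if order is None:
            order = lambda kmer: kmer  # default: plain lexicographic order
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        chosen = set()
        for start in range(len(kmers) - w + 1):
            window = kmers[start:start + w]
            best = min(range(w), key=lambda x: (order(window[x]), x))
            chosen.add(start + best)
        return chosen

    def hashed_order(kmer):
        """A randomized (hash-based) ordering, in the spirit of the
        paper's recommendation to avoid the lexicographic order."""
        return hashlib.md5(kmer.encode()).hexdigest()
    ```

    On a low-complexity run such as "AAAAAAAA", the lexicographic order exhibits exactly the over-selection the abstract mentions: every window's leftmost "AAA" is a fresh minimizer, so far more k-mers are kept than the roughly 2/(w+1) density expected on random sequences.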

  20. Isotope-selective sensor for medical diagnostics based on PAS

    NASA Astrophysics Data System (ADS)

    Wolff, M.; Groninga, H. G.; Harde, H.

    2005-06-01

    The development of new optical sensor technologies has a major impact on the progression of diagnostic methods. Among the steadily increasing number of non-invasive 13C breath tests, the Urea Breath Test for detection of Helicobacter pylori is the most prominent; however, many recent developments go beyond gastroenterological applications. We present a new detection scheme for breath analysis that employs an especially compact and simple set-up based on photoacoustic spectroscopy. Using a wavelength-modulated DFB diode laser and taking advantage of acoustical resonances of the sample cell, we performed very sensitive isotope-selective measurements on CO2. Detection limits of a few ppm for 13CO2, and of approximately 1% for the variation of the 13CO2 concentration, were achieved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivares, Stefano

    We investigate the performance of a selective cloning machine based on linear optical elements and Gaussian measurements, which allows one to clone at will one of the two incoming input states. This machine is a complete generalization of a 1 → 2 cloning scheme demonstrated by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)]. The input-output fidelity is studied for a generic Gaussian input state, and the effect of nonunit quantum efficiency is also taken into account. We show that, if the states to be cloned are squeezed states with known squeezing parameter, then the fidelity can be enhanced using a third suitable squeezed state during the final stage of the cloning process. A binary communication protocol based on the selective cloning machine is also discussed.

  2. Structure of Z-scheme CdS/CQDs/BiOCl heterojunction with enhanced photocatalytic activity for environmental pollutant elimination

    NASA Astrophysics Data System (ADS)

    Pan, Jinbo; Liu, Jianjun; Zuo, Shengli; Khan, Usman Ali; Yu, Yingchun; Li, Baoshan

    2018-06-01

    Z-scheme CdS/CQDs/BiOCl heterojunction was synthesized by a facile region-selective deposition process. Owing to the electronegativity of the groups on the surface of Carbon Quantum Dots (CQDs), they can be sandwiched between CdS and BiOCl, based on the stepwise region-selective deposition process. The samples were systematically characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high resolution TEM (HRTEM), X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectroscopy (UV-vis DRS), photoelectrochemical measurements and photoluminescence (PL). The results indicate that CQDs with size of 2-5 nm and CdS nanoparticles with size of 5-10 nm dispersed uniformly on the surface of cuboid BiOCl nanosheets. The photocatalytic performance tests reveal that the CdS/CQDs/BiOCl heterojunction exhibits much higher photocatalytic activity than that of BiOCl, CdS/BiOCl and CQDs/BiOCl for Rhodamine B (RhB) and phenol photodegradation under visible and UV light illumination, respectively. The enhanced photocatalytic performance should be attributed to the Z-scheme structure of CdS/CQDs/BiOCl, which not only improves visible light absorption and the migration efficiency of the photogenerated electron-holes but also keeps high redox ability of CdS/CQDs/BiOCl composite.

  3. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  4. Neural mechanisms of selective attention in the somatosensory system.

    PubMed

    Gomez-Ramirez, Manuel; Hysaj, Kristjana; Niebur, Ernst

    2016-09-01

    Selective attention allows organisms to extract behaviorally relevant information while ignoring distracting stimuli that compete for the limited resources of their central nervous systems. Attention is highly flexible, and it can be harnessed to select information based on sensory modality, within-modality feature(s), spatial location, object identity, and/or temporal properties. In this review, we discuss the body of work devoted to understanding mechanisms of selective attention in the somatosensory system. In particular, we describe the effects of attention on tactile behavior and corresponding neural activity in somatosensory cortex. Our focus is on neural mechanisms that select tactile stimuli based on their location on the body (somatotopic-based attention) or their sensory feature (feature-based attention). We highlight parallels between selection mechanisms in touch and other sensory systems and discuss several putative neural coding schemes employed by cortical populations to signal the behavioral relevance of sensory inputs. Specifically, we contrast the advantages and disadvantages of using a gain vs. spike-spike correlation code for representing attended sensory stimuli. We favor a neural network model of tactile attention that is composed of frontal, parietal, and subcortical areas that controls somatosensory cells encoding the relevant stimulus features to enable preferential processing throughout the somatosensory hierarchy. Our review is based on data from noninvasive electrophysiological and imaging data in humans as well as single-unit recordings in nonhuman primates. Copyright © 2016 the American Physiological Society.

  5. Neural mechanisms of selective attention in the somatosensory system

    PubMed Central

    Hysaj, Kristjana; Niebur, Ernst

    2016-01-01

    Selective attention allows organisms to extract behaviorally relevant information while ignoring distracting stimuli that compete for the limited resources of their central nervous systems. Attention is highly flexible, and it can be harnessed to select information based on sensory modality, within-modality feature(s), spatial location, object identity, and/or temporal properties. In this review, we discuss the body of work devoted to understanding mechanisms of selective attention in the somatosensory system. In particular, we describe the effects of attention on tactile behavior and corresponding neural activity in somatosensory cortex. Our focus is on neural mechanisms that select tactile stimuli based on their location on the body (somatotopic-based attention) or their sensory feature (feature-based attention). We highlight parallels between selection mechanisms in touch and other sensory systems and discuss several putative neural coding schemes employed by cortical populations to signal the behavioral relevance of sensory inputs. Specifically, we contrast the advantages and disadvantages of using a gain vs. spike-spike correlation code for representing attended sensory stimuli. We favor a neural network model of tactile attention that is composed of frontal, parietal, and subcortical areas that controls somatosensory cells encoding the relevant stimulus features to enable preferential processing throughout the somatosensory hierarchy. Our review is based on data from noninvasive electrophysiological and imaging data in humans as well as single-unit recordings in nonhuman primates. PMID:27334956

  6. A suggested color scheme for reducing perception-related accidents on construction work sites.

    PubMed

    Yi, June-seong; Kim, Yong-woo; Kim, Ki-aeng; Koo, Bonsang

    2012-09-01

    Changes in workforce demographics have led to the need for more sophisticated approaches to addressing the safety requirements of the construction industry. Despite extensive research in other industry domains, where perception-related accidents have been effectively diminished by the implementation of color schemes, the construction industry has been passive in exploring their impact. This research demonstrated that the use of appropriate color schemes could improve the actions and psychology of workers on site, thereby increasing their perception of potentially dangerous situations. As a preliminary study, the objects selected through rigorous analysis of accident reports were workwear, safety nets, gondolas, scaffolding, and safety passages. The colors adopted on site for temporary facilities were drawn from existing theoretical and empirical research that suggests the use of certain colors and their combinations to improve visibility and conspicuity while minimizing work fatigue. The color schemes were also tested and confirmed through two workshops with workers and managers currently involved in actual projects. The impacts of the color schemes suggested in this paper are summarized as follows. First, the color schemes improve the conspicuity of facilities relative to other on-site components, enabling workers to quickly discern and orient themselves in their work environment. Second, the color schemes were selected to minimize the visual fatigue and monotony that can potentially increase accidents. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Status of NC Primer Demonstration & Transition

    DTIC Science & Technology

    2014-11-20

    Camo Paint Scheme H-53
    • Six a/c selected for demonstration of Hentzen 17176KEP FRCE
    • Full non-chromate coating stack-up demo – Hentzen...CCC
    • BUNO #: #163076 #163080 #164859
    NC Primer Demos: Camo Paint Scheme F/A-18A-D
    • 13 a/c selected for demonstration of PPG-Deft 02-GN...
    Strippability; 5. Dry Time (-23377); 6. Fluid Resistance (Skydrol); 7. Solvent Resistance; 8. Thickness Tolerance; 9. Application Method; 10. Packaging (1K

  8. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors, and the medicine information systems to establish a secure communication platform over public networks. Zhu recently presented an improved authentication scheme to address a weakness in the scheme of Wei et al., which cannot resist off-line password-guessing attacks. This investigation shows that Zhu's improved scheme has faults of its own: it cannot execute correctly and is vulnerable to parallel-session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.

  9. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments are becoming more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme built on a cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desired security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has stronger security properties and lower computation cost. It is thus more appropriate for practical applications in remote distributed networks.

  10. Rates of inbreeding and genetic adaptation for populations managed as herds in zoos with a rotational mating system or with optimized contribution of parents.

    PubMed

    Mucha, S; Komen, H

    2016-08-01

    This study compares two genetic management scenarios for species kept in herds, such as deer. The simulations were designed so that their results can be extended to a wide range of zoo populations. In the first scenario, simulated populations of size 3 × 20, 6 × 40 or 20 × 60 (herds × animals per herd) were managed with a rotational mating (RM) scheme in which 10%, 20% or 50% of males were selected for breeding and moved between herds in a circular fashion. The second scenario was based on optimal contribution theory (OC). OC requires an accurate pedigree to calculate kinship; males were selected and assigned numbers of offspring to minimize kinship in the next generation. RM was efficient in restricting inbreeding and produced results comparable with OC. However, RM can result in genetic adaptation of the population to the zoo environment, in particular when 20% or fewer of the males are selected for rotation and selection of animals is not random. The lowest rates of inbreeding were obtained by combining OC with rotation of males as in the RM scheme. RM is easy to implement in practice and does not require pedigree data; when a full pedigree is available, OC management is preferable. © 2015 Blackwell Verlag GmbH.
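    The circular movement of males that defines the RM scheme can be sketched in a few lines (the herd/animal representation, fraction and random seed are illustrative placeholders, not the paper's simulation setup):

```python
import random

def rotate_males(herds, fraction=0.5, rng=None):
    """One round of a rotational mating (RM) scheme: in each herd a fraction
    of the males is chosen at random (random choice matters, since non-random
    selection risks genetic adaptation to the zoo environment), and the
    chosen males move to the next herd in a fixed circular order."""
    rng = rng or random.Random(1)
    picks = []
    for herd in herds:
        males = [a for a in herd if a["sex"] == "M"]
        k = max(1, int(len(males) * fraction))
        picks.append(rng.sample(males, k))
    for i, group in enumerate(picks):
        dest = (i + 1) % len(herds)   # circular: herd i sends males to herd i+1
        for m in group:
            herds[i].remove(m)
            herds[dest].append(m)
    return herds
```

Note that nothing here needs pedigree data, which is exactly the practical appeal of RM noted in the abstract; an OC implementation would instead need a kinship matrix.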

  11. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  12. A secure biometrics-based authentication scheme for telecare medicine information systems.

    PubMed

    Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng

    2013-10-01

    The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to a denial-of-service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also performs better.

  13. High-order central ENO finite-volume scheme for hyperbolic conservation laws on three-dimensional cubed-sphere grids

    NASA Astrophysics Data System (ADS)

    Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.

    2015-02-01

    A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. 
Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
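    The central building block named above, least-squares reconstruction over an overdetermined stencil, can be illustrated with a 1D point-value toy (this is a sketch of the idea only, not the paper's 3D cell-average K-exact machinery):

```python
import numpy as np

def lstsq_reconstruction(x_stencil, u_stencil, x0, degree=2):
    """Least-squares polynomial reconstruction over an overdetermined
    stencil, the core idea behind K-exact CENO reconstruction: with more
    stencil points than polynomial coefficients, the system is solved in
    the least-squares sense rather than by exact interpolation."""
    dx = np.asarray(x_stencil, dtype=float) - x0
    A = np.vander(dx, degree + 1, increasing=True)   # columns: 1, dx, dx^2, ...
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(u_stencil, dtype=float), rcond=None)
    return coeffs   # coeffs[0] is the reconstructed value at x0

# a smooth quadratic is recovered exactly by the overdetermined fit
x = np.linspace(-2.0, 2.0, 7)        # 7 stencil points, only 3 unknowns
u = 1.0 + 2.0 * x + 3.0 * x**2
c = lstsq_reconstruction(x, u, 0.0)
```

Because the fit is not tied to one particular interpolation stencil, the same routine works unchanged when the stencil shrinks at degenerate block boundaries, which is the flexibility the rotation-based stencil-selection mechanism exploits.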

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Gustafson, William I.; Kassianov, Evgueni I.

    A new treatment for shallow clouds has been introduced into the Weather Research and Forecasting (WRF) model. The new scheme, called the cumulus potential (CuP) scheme, replaces the ad hoc trigger function used in the Kain-Fritsch cumulus parameterization with a trigger function related to the distribution of temperature and humidity in the convective boundary layer via probability density functions (PDFs). An additional modification to the default version of WRF is the computation of a cumulus cloud fraction based on the time scales relevant for shallow cumuli. Results from three case studies over the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in north central Oklahoma are presented. These days were selected because of the presence of shallow cumuli over the ARM site. The modified version of WRF does a much better job predicting the cloud fraction and the downwelling shortwave irradiance than control simulations utilizing the default Kain-Fritsch scheme. The modified scheme includes a number of additional free parameters, including the number and size of bins used to define the PDF, the minimum frequency of a bin within the PDF before that bin is considered for shallow clouds to form, and the critical cumulative frequency of bins required to trigger deep convection. A series of tests were undertaken to evaluate the sensitivity of the simulations to these parameters. Overall, the scheme was found to be relatively insensitive to each of the parameters.
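    The free parameters listed above (bin count, minimum bin frequency, critical cumulative frequency) can be made concrete with a toy PDF-based trigger in the spirit of the CuP scheme; the bin counts and thresholds below are illustrative placeholders, not the values used in WRF:

```python
import numpy as np

def cup_style_trigger(theta, q, nbins=10, min_freq=0.02, crit_cum_freq=0.3):
    """Illustrative sketch of a PDF-based trigger: boundary-layer samples of
    potential temperature (theta) and humidity (q) are binned into a joint
    PDF; bins rarer than min_freq are discarded, and the cumulative
    frequency of the retained bins decides whether convection triggers."""
    hist, _, _ = np.histogram2d(theta, q, bins=nbins)
    pdf = hist / hist.sum()
    retained = pdf[pdf >= min_freq].sum()   # cumulative frequency of viable bins
    return retained, bool(retained >= crit_cum_freq)
```

The sensitivity tests described in the abstract amount to sweeping `nbins`, `min_freq` and `crit_cum_freq` and checking how much the trigger decision changes.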

  15. Feasibility of community-based health insurance in rural tropical Ecuador.

    PubMed

    Eckhardt, Martin; Forsberg, Birger Carl; Wolf, Dorothee; Crespo-Burgos, Antonio

    2011-03-01

    The main objective of this study was to assess people's willingness to join a community-based health insurance (CHI) model in El Páramo, a rural area in Ecuador, and to determine factors influencing this willingness. A second objective was to identify people's understanding and attitudes toward the presented CHI model. A cross-sectional survey was carried out using a structured questionnaire. Of an estimated 829 households, 210 were randomly selected by two-stage cluster sampling. Attitudes toward the scheme were assessed. Information on factors possibly influencing willingness to join was collected and related to the willingness to join. To gain an insight into a respondent's possible ability to pay, health care expenditure on the last illness episode was assessed. Feasibility was defined as at least 50% of household heads willing to join the scheme. Willingness to join the CHI model for US$30 per year was 69.3%. With affiliation, 92.2% of interviewees stated that they would visit the local health facility more often. Willingness to join was found to be negatively associated with education. Other variables showed no significant association with willingness to join. The study showed a positive attitude toward the CHI scheme. Substantial health care expenditures on the last illness episode were documented. The investigation concludes that CHI in the study region is feasible. However, enrollments are likely to be lower than the stated willingness to join. Still, a CHI scheme should present an interesting financing alternative in rural areas where services are scarce and difficult to sustain.

  16. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers.

    PubMed

    Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L

    2010-08-01

    To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Pay-for-performance in disease management: a systematic review of the literature.

    PubMed

    de Bruin, Simone R; Baan, Caroline A; Struijs, Jeroen N

    2011-10-14

    Pay-for-performance (P4P) is increasingly implemented in the healthcare system to encourage improvements in healthcare quality. P4P is a payment model that rewards healthcare providers with financial incentives for meeting pre-established targets for the delivery of healthcare services. Based on their performance, healthcare providers receive either additional or reduced payment. Currently, little is known about P4P schemes intended to improve the delivery of chronic care through disease management. The objectives of this paper are therefore to provide an overview of P4P schemes used to stimulate delivery of chronic care through disease management and to provide insight into their effects on healthcare quality and costs. A systematic PubMed search was performed for English-language papers published between 2000 and 2010 describing P4P schemes related to the implementation of disease management. Wagner's chronic care model was used to make disease management operational. Eight P4P schemes were identified, introduced in the USA (n = 6), Germany (n = 1), and Australia (n = 1). Five P4P schemes were part of a larger scheme of interventions to improve quality of care, whereas three were implemented on their own. Most financial incentives were rewards, selective, and granted on the basis of absolute performance. More variation was found in incented entities and the basis for providing incentives. Information about motivation, certainty, size, frequency, and duration of the financial incentives was generally limited. Five studies were identified that evaluated the effects of P4P on healthcare quality. Most studies showed positive effects of P4P on healthcare quality. No studies were found that evaluated the effects of P4P on healthcare costs. The number of P4P schemes to encourage disease management is limited, and hardly any information is available about the effects of such schemes on healthcare quality and costs. © 2011 de Bruin et al; licensee BioMed Central Ltd.

  19. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    NASA Astrophysics Data System (ADS)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and can only be retrieved if the secret key is available. The container image is encrypted for greater security against attack. Because wireless devices run on battery power and have limited computational capability, we use selective encryption of the container image to reduce energy consumption. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. The selective encryption mechanism is expected to make both encryption and decryption more time-efficient, and a significant reduction in error rate is expected from the bit pull-up method.

  20. A Theoretical Analysis of a New Polarimetric Optical Scheme for Glucose Sensing in the Human Eye

    NASA Technical Reports Server (NTRS)

    Rovati, Luigi L.; Boeckle, Stefan; Ansari, Rafat R.; Salzman, Jack A. (Technical Monitor)

    2002-01-01

    The challenging task in in vivo polarimetric glucose sensing is the identification and selection of a scheme to optically access the aqueous humor of the human eye. In this short communication an earlier approach of Cote et al. is theoretically compared with our new optical scheme. Simulations of the new scheme using the eye model of Navarro suggest that the new optical geometry can overcome the limitations of the previous approach for in vivo measurements of glucose in a human eye.

  1. The atmospheric boundary layer — advances in knowledge and application

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Hess, G. D.; Physick, W. L.; Bougeault, P.

    1996-02-01

    We summarise major activities and advances in boundary-layer knowledge in the 25 years since 1970, with emphasis on the application of this knowledge to surface and boundary-layer parametrisation schemes in numerical models of the atmosphere. Progress in three areas is discussed: (i) the mesoscale modelling of selected phenomena; (ii) numerical weather prediction; and (iii) climate simulations. Future trends are identified, including the incorporation into models of advanced cloud schemes and interactive canopy schemes, and the nesting of high resolution boundary-layer schemes in global climate models.

  2. Joint Remote State Preparation Schemes for Two Different Quantum States Selectively

    NASA Astrophysics Data System (ADS)

    Shi, Jin

    2018-05-01

    A scheme is proposed for the joint remote preparation of either of two different one-qubit states, selected on demand, using one four-dimensional spatial-mode-entangled KLM state as the quantum channel. A scheme for the joint remote preparation of either of two different two-qubit states is also proposed, using one four-dimensional spatial-mode-entangled KLM state and one three-dimensional spatial-mode-entangled GHZ state as quantum channels. The schemes involve quantum non-demolition measurement, Hadamard gate operations, projective measurement and unitary transformations.

  3. A Microbial Assessment Scheme to measure microbial performance of Food Safety Management Systems.

    PubMed

    Jacxsens, L; Kussaga, J; Luning, P A; Van der Spiegel, M; Devlieghere, F; Uyttendaele, M

    2009-08-31

    A Food Safety Management System (FSMS) implemented in a food processing industry is based on Good Hygienic Practices (GHP) and Hazard Analysis Critical Control Point (HACCP) principles, and should address both food safety control and assurance activities in order to guarantee food safety. One emerging challenge is to assess the performance of an existing FSMS. The objective of this work is to describe the development of a Microbial Assessment Scheme (MAS) as a tool for the systematic analysis of microbial counts, in order to assess the current microbial performance of an implemented FSMS. It is assumed that low numbers of microorganisms and small variations in microbial counts indicate an effective FSMS. The MAS is a procedure that defines the identification of critical sampling locations, the selection of microbiological parameters, the assessment of sampling frequency, the selection of the sampling method and method of analysis, and finally data processing and interpretation. Based on the MAS assessment, microbial safety level profiles can be derived, indicating which microorganisms contribute to food safety, and to what extent, for a specific food processing company. The MAS concept is illustrated with a case study in the pork processing industry, where ready-to-eat meat products are produced (cured, cooked ham and cured, dried bacon).
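    The stated premise, that low counts with small variation indicate an effective FSMS, can be expressed as a toy decision rule (the numeric limits below are illustrative placeholders, not values from the scheme):

```python
import statistics

def microbial_performance(log_counts, mean_limit=2.0, sd_limit=0.5):
    """Toy interpretation rule capturing the MAS premise: a sampling
    location is judged to reflect an effective FSMS when the mean log
    microbial count is low AND the variation between samples is small."""
    m = statistics.mean(log_counts)
    s = statistics.stdev(log_counts)
    return {"mean_log_cfu": m, "sd": s,
            "effective": m <= mean_limit and s <= sd_limit}
```

In the actual MAS this judgement would be made per critical sampling location and per microbiological parameter, yielding the microbial safety level profile mentioned in the abstract.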

  4. A Computer Clone of Human Expert for Mobility Management Scheme (E-MMS): Step toward Green Transportation

    NASA Astrophysics Data System (ADS)

    Resdiansyah; O. K Rahmat, R. A.; Ismail, A.

    2018-03-01

    Green transportation refers to sustainable transport that minimizes social and environmental impact while remaining energy-efficient, and it includes the deployment of non-motorized transport strategies to promote healthy lifestyles, known as a Mobility Management Scheme (MMS). Since constructing more road infrastructure cannot by itself solve the problem of congestion, past research has shown that MMS is an effective measure for mitigating congestion and achieving green transportation. MMS consists of different strategies and policies, subdivided into categories according to how they influence travel behaviour. Appropriate selection of mobility strategies ensures their effectiveness in mitigating congestion problems; however, determining appropriate strategies requires a human expert and depends on a number of success factors. This research developed a computer clone of a human expert, called E-MMS. The knowledge acquisition for MMS strategies and the subsequent strategy-selection process were encoded in a knowledge-based system using an expert system shell. The newly developed system was verified, validated and evaluated (VV&E) by comparing its output with recommendations from a real transportation expert.

  5. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environmental and climate studies but can also serve as input for further data processing. Previous methods have shown limitations in defining the base and top and in window-size setting, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selection, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be applied directly to uncalibrated data without requiring additional measurements or window-size selection.
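    A stripped-down sketch of the linear-segmentation idea (segment length and slope threshold are illustrative, not the paper's tuned values, and the real scheme adds attenuation handling and false-positive removal) could look like:

```python
import numpy as np

def detect_layer(profile, seg_len=5, slope_thresh=0.5):
    """Minimal linear-segmentation sketch: fit a straight line to successive
    short segments of an up-looking signal profile; the layer base is the
    first segment whose fitted slope exceeds +slope_thresh, and the top is
    the next segment after the base whose slope falls below -slope_thresh."""
    z = np.arange(len(profile), dtype=float)
    p = np.asarray(profile, dtype=float)
    slopes = [np.polyfit(z[i:i + seg_len], p[i:i + seg_len], 1)[0]
              for i in range(len(p) - seg_len + 1)]
    base = next((i for i, s in enumerate(slopes) if s > slope_thresh), None)
    if base is None:
        return None, None
    top = next((i for i in range(base + 1, len(slopes))
                if slopes[i] < -slope_thresh), None)
    return base, top
```

Fitting slopes over segments, rather than comparing neighbouring points, is what makes this kind of scheme tolerant of noise in uncalibrated data.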

  6. Reduction of false-positives in a CAD scheme for automated detection of architectural distortion in digital mammography

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Mencattini, Arianna; Casti, Paola; Martinelli, Eugenio; di Natale, Corrado; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.

    2018-02-01

    This paper proposes a method to reduce the number of false positives (FP) in a computer-aided detection (CAD) scheme for automated detection of architectural distortion (AD) in digital mammography. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect than microcalcifications and masses, and it is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automated detection of AD in breast images. The usual approach is to automatically detect possible sites of AD in a mammographic image (segmentation step) and then use a classifier to eliminate false positives and identify the suspicious regions (classification step). This paper focuses on optimizing the segmentation step to reduce the number of FPs passed to the classifier. The proposal is to use statistical measurements to score the segmented regions and then apply a threshold to select a small number of regions to submit to the classification step, improving the detection performance of the CAD scheme. We evaluated 12 image features for scoring and selecting suspicious regions in 74 clinical full-field digital mammography (FFDM) images, all of which contained at least one region with AD previously marked by an expert radiologist. The results showed that the proposed method can reduce the false positives of the segmentation step from 43.4 to 34.5 FP per image, without increasing the number of false negatives.
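    The score-and-threshold selection step described above reduces, in essence, to ranking regions by a statistical feature and keeping the top of the list (here `score_fn` and `keep` stand in for the paper's 12 evaluated features and tuned threshold):

```python
import numpy as np

def select_top_regions(regions, score_fn, keep=10):
    """Sketch of the FP-reduction idea: score every segmented region with a
    statistical measure and pass only the highest-scoring candidates on to
    the classification step, discarding likely false positives early."""
    scores = np.array([score_fn(r) for r in regions])
    order = np.argsort(scores)[::-1]          # indices by descending score
    return [regions[i] for i in order[:keep]]
```

For example, using pixel variance as the score keeps textured candidates and drops nearly uniform ones.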

  7. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate-1/3, ν = 6 (i.e., 2^ν = 64 states) convolutional code with binary phase shift keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency is K = 1/3 bit/sym. This is a very bandwidth-inefficient scheme, although it has the advantages of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected, allowing a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites, which requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate-5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK. The trellis code also has the advantage of being 45° rotationally invariant, which means the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
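    The data rates quoted above follow directly from the bandwidth efficiency K and the symbol rate, R = K · Rs; a one-line check:

```python
def data_rate_mbit(bandwidth_eff_bit_per_sym, symbol_rate_msym):
    """Data rate implied by a coded modulation's bandwidth efficiency
    K (bit/sym) at a given symbol rate Rs (Msym/s): R = K * Rs."""
    return bandwidth_eff_bit_per_sym * symbol_rate_msym

# HST baseline: rate-1/3 convolutional code with BPSK, K = 1/3 bit/sym
hst = data_rate_mbit(1 / 3, 3.0)        # about 1 Mbit/s
# Proposed: rate-5/6 4D-8PSK trellis code, K = 2.5 bit/sym
proposed = data_rate_mbit(2.5, 3.0)     # about 7.5 Mbit/s at the same 3 Msym/s
```

At the same 3 Msym/s modulator limit, the trellis-coded scheme thus carries 7.5 times the data of the baseline.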

  8. Last-position elimination-based learning automata.

    PubMed

    Zhang, Junqi; Wang, Cheng; Zhou, MengChu

    2014-12-01

    An update scheme for the state probability vector of actions is critical for learning automata (LA). The most popular is the pursuit scheme, which pursues the estimated optimal action and penalizes the others. This paper proposes a reverse philosophy that leads to last-position elimination-based learning automata (LELA). The action graded last in terms of estimated performance is penalized by decreasing its state probability, and it is eliminated when its state probability becomes zero. All active actions, that is, actions with nonzero state probability, equally share the penalized state probability from the last-position action at each iteration. The proposed LELA is characterized by a relaxed convergence condition for the optimal action, an accelerated step size in the state probability update for the estimated optimal action, and enriched sampling of the estimated nonoptimal actions. A proof of the ϵ-optimal property of the proposed algorithm is presented. Last-position elimination is a widespread philosophy in the real world and has proved helpful for the update scheme of learning automata in simulations of well-known benchmark environments. In the simulations, two versions of LELA, using different selection strategies for the last action, are compared with the classical pursuit algorithms Discretized Pursuit Reward-Inaction (DP(RI)) and Discretized Generalized Pursuit Algorithm (DGPA). Simulation results show that the proposed schemes achieve significantly faster convergence and higher accuracy than the classical ones. In particular, the proposed schemes reduce the interval needed to find the best parameter for a specific environment in the classical pursuit algorithms; this makes parameter tuning easier to perform and saves considerable time in practical applications. Furthermore, the convergence curves and the corresponding variance coefficient curves of the contenders are illustrated to characterize their essential differences and to verify the analysis of the proposed algorithms.
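    The update rule described above is simple enough to state directly in code (the step size is an illustrative parameter; the published algorithm's discretized resolution and estimator details are omitted):

```python
def lela_update(probs, estimates, step=0.05):
    """One LELA-style update: among the active actions (nonzero state
    probability), the one with the worst performance estimate loses up to
    `step` of probability (reaching zero means it is eliminated), and the
    removed amount is shared equally by the remaining active actions."""
    active = [i for i, p in enumerate(probs) if p > 0]
    if len(active) == 1:                # only the winner remains; converged
        return list(probs)
    last = min(active, key=lambda i: estimates[i])   # worst-graded action
    removed = min(step, probs[last])
    out = list(probs)
    out[last] -= removed
    share = removed / (len(active) - 1)
    for i in active:
        if i != last:
            out[i] += share
    return out
```

Note the contrast with pursuit schemes: nothing here singles out the estimated best action; probability flows away from the worst instead, which is what relaxes the convergence condition mentioned in the abstract.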

  9. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

    Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability and, in power market operation, for feasibility tests of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which may overlook critical contingencies caused by variable system status. To obtain a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of performing the analysis for many more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on more than 10,000 cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model were used to demonstrate performance; speedups of 3964 with 4096 cores and 7877 with 10,240 cores were obtained. This paper reports the performance of the load balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving performance.
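    The single-counter dynamic scheme can be sketched with threads standing in for the MPI processes of the actual study (a minimal sketch of the dispatch pattern only, not the paper's implementation):

```python
import threading

def run_with_counter(n_tasks, n_workers, work_fn):
    """Single-counter dynamic load balancing: each worker claims the next
    contingency-case index with an atomic fetch-and-increment on a shared
    counter, so faster workers naturally process more cases and no static
    partition of the one million cases is needed."""
    lock = threading.Lock()
    state = {"next": 0}
    results = [None] * n_tasks

    def worker():
        while True:
            with lock:                    # the "counter" critical section
                i = state["next"]
                state["next"] += 1
            if i >= n_tasks:
                return
            results[i] = work_fn(i)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

At 10,000+ cores the counter itself becomes a point of contention, which is the motivation for the two-counter variant evaluated in the paper.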

  10. Incentivising effort in governance of public hospitals: Development of a delegation-based alternative to activity-based remuneration.

    PubMed

    Søgaard, Rikke; Kristensen, Søren Rud; Bech, Mickael

    2015-08-01

    This paper is a first examination of the development of an alternative to activity-based remuneration in public hospitals, which is currently being tested at nine hospital departments in a Danish region. The objective is to examine the process of delegating the authority of designing new incentive schemes from the principal (the regional government) to the agents (the hospital departments). We adopt a theoretical framework where, when deciding about delegation, the principal should trade off an initiative effect against the potential cost of loss of control. The initiative effect is evaluated by studying the development process and the resulting incentive schemes for each of the departments. Similarly, the potential cost of loss of control is evaluated by assessing the congruence between focus of the new incentive schemes and the principal's objectives. We observe a high impact of the effort incentive in the form of innovative and ambitious selection of projects by the agents, leading to nine very different solutions across departments. However, we also observe some incongruence between the principal's stated objectives and the revealed private interests of the agents. Although this is a baseline study involving high uncertainty about the future, the findings point at some issues with the delegation approach that could lead to inefficient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Investigation of Spray Cooling Schemes for Dynamic Thermal Management

    NASA Astrophysics Data System (ADS)

    Yata, Vishnu Vardhan Reddy

    This study aims to investigate variable flow and intermittent flow spray cooling characteristics for efficiency improvement in active two-phase thermal management systems. The variable flow spray cooling scheme requires control of pump input voltage (or speed), while the intermittent flow spray cooling scheme requires control of solenoid valve duty cycle and frequency. Several testing scenarios representing dynamic heat load conditions are implemented to characterize the overall performance of variable flow and intermittent flow spray cooling cases in comparison with the reference, steady flow spray cooling case with constant flowrate, continuous spray cooling. Tests are conducted on a small-scale, closed loop spray cooling system featuring a pressure atomized spray nozzle. HFE-7100 dielectric liquid is selected as the working fluid. Two types of test samples are prepared on 10 mm x 10 mm x 2 mm copper substrates with matching size thick film resistors attached onto the opposite side, to generate heat and simulate high heat flux electronic devices. The test samples include: (i) plain, smooth surface, and (ii) microporous surface featuring 100 µm thick copper-based coating prepared by dual stage electroplating technique. Experimental conditions involve HFE-7100 at atmospheric pressure and 30°C and 10°C subcooling. Steady flow spray cooling tests are conducted at flow rates of 2-5 ml/cm²·s, by controlling the heat flux in increasing steps, and recording the corresponding steady-state temperatures to obtain cooling curves in the form of surface superheat vs. heat flux. Variable flow and intermittent flow spray cooling tests are done at selected flowrate and subcooling conditions to investigate the effects of dynamic flow conditions on maintaining the target surface temperatures defined based on reference steady flow spray cooling performance.

  12. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented with three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of identifying the failure category and classifying it according to the failed element. During the third phase, a general evaluation of the failure is performed, with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system.
Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These are shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive, integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.

  13. Detection of small bowel tumors in capsule endoscopy frames using texture analysis based on the discrete wavelet transform.

    PubMed

    Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S

    2008-01-01

    Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. Capsule endoscopic images possess vital information expressed by color and texture. This paper presents an approach based on the textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme using a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopy exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.
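
    The band-selection step can be illustrated with a minimal, self-contained sketch: a one-level 2-D Haar DWT written in plain Python (a simplified stand-in for the wavelet transform used in the paper; `haar2d` and `band_energy` are illustrative names). Sub-bands can then be ranked by energy before synthesizing the texture image.

```python
def haar2d(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image (list of
    lists), returning the four sub-bands LL, LH, HL, HH."""
    rows, cols = len(img), len(img[0])
    # transform rows: low-pass (average) and high-pass (difference) halves
    tmp = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(0, cols, 2):
            a, b = img[r][c], img[r][c + 1]
            tmp[r][c // 2] = (a + b) / 2.0
            tmp[r][cols // 2 + c // 2] = (a - b) / 2.0
    # then transform columns of the intermediate result
    out = [[0.0] * cols for _ in range(rows)]
    for c in range(cols):
        for r in range(0, rows, 2):
            a, b = tmp[r][c], tmp[r + 1][c]
            out[r // 2][c] = (a + b) / 2.0
            out[rows // 2 + r // 2][c] = (a - b) / 2.0
    h, w = rows // 2, cols // 2
    sub = lambda r0, c0: [row[c0:c0 + w] for row in out[r0:r0 + h]]
    return {"LL": sub(0, 0), "LH": sub(0, w), "HL": sub(h, 0), "HH": sub(h, w)}

def band_energy(band):
    """Mean squared coefficient, a simple score for ranking sub-bands by
    texture content before image synthesis."""
    n = sum(len(row) for row in band)
    return sum(v * v for row in band for v in row) / n
```

    A flat (texture-free) region yields zero energy in the detail bands, which is exactly why energy-style criteria single out texture-rich bands.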

  14. Understanding security failures of two authentication and key agreement schemes for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra

    2015-03-01

    Smart-card-based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes provide better efficiency than conventional public key cryptography based schemes. These schemes are important as they present an efficient solution for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to denial of service attacks. To understand the security failures of these cryptographic schemes, which is key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.

  15. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that supports patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes an essential need when a remote patient logs in to the telecare server. Recently, many extended-chaotic-maps-based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attack, and lack of local verification. To overcome these flaws, we propose a chaotic-maps and smart-card based password authentication scheme by applying biometrics techniques and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.
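
    As a loose illustration of the ingredients such schemes combine (password, biometric template and a server secret bound into smart-card data), here is a generic salted-hash sketch. This is emphatically not the chaotic-maps construction of the paper; all names (`register`, `login`, the masking layout) are hypothetical, and a real protocol would run a challenge-response rather than a direct comparison.

```python
import hashlib
import hmac
import os

def h(*parts: bytes) -> bytes:
    """Collision-resistant hash over concatenated byte strings."""
    return hashlib.sha256(b"|".join(parts)).digest()

def register(password: bytes, biometric: bytes, server_key: bytes):
    """Server issues smart-card data binding the password and a
    biometric template to a server-side secret via XOR masking."""
    salt = os.urandom(16)
    card_secret = h(server_key, salt)
    masked = bytes(a ^ b for a, b in
                   zip(card_secret, h(password, biometric, salt)))
    return {"salt": salt, "masked": masked}

def login(card, password: bytes, biometric: bytes, server_key: bytes) -> bool:
    """Recover the card secret from the user's password and biometric,
    then verify it against the server secret (constant-time compare)."""
    recovered = bytes(a ^ b for a, b in
                      zip(card["masked"], h(password, biometric, card["salt"])))
    return hmac.compare_digest(recovered, h(server_key, card["salt"]))
```

    The XOR masking means the card alone reveals nothing useful without both the password and the biometric, which is the "local verification" property the paper asks of Li et al.'s scheme.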

  16. Conductivity based on selective etch for GaN devices and applications thereof

    DOEpatents

    Zhang, Yu; Sun, Qian; Han, Jung

    2015-12-08

    This invention relates to methods of generating NP gallium nitride (GaN) across large areas (>1 cm²) with controlled pore diameters, pore density, and porosity. Also disclosed are methods of generating novel optoelectronic devices based on porous GaN. Additionally, a layer transfer scheme to separate and create free-standing crystalline GaN thin layers is disclosed that enables a new device manufacturing paradigm involving substrate recycling. Other disclosed embodiments of this invention relate to fabrication of GaN-based nanocrystals and the use of NP GaN electrodes for electrolysis, water splitting, or photosynthetic process applications.

  17. The effect of individually-induced processes on image-based overlay and diffraction-based overlay

    NASA Astrophysics Data System (ADS)

    Oh, SeungHwa; Lee, Jeongjin; Lee, Seungyoon; Hwang, Chan; Choi, Gilheyun; Kang, Ho-Kyu; Jung, EunSeung

    2014-04-01

    In this paper, a set of wafers with separated processes was prepared and overlay measurement results were compared using two methods: image-based overlay (IBO) and diffraction-based overlay (DBO). Based on the experimental results, a theoretical treatment of the relationship between overlay mark deformation and overlay variation is presented. Moreover, overlay reading simulation was used to verify and predict overlay variation due to deformation of overlay marks caused by the induced processes. This study provides an understanding of individual process effects on overlay measurement error. Additionally, a guideline for selecting the proper overlay measurement scheme for a specific layer is presented.

  18. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not exhibit the freshness property and the validity of messages so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key such that the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence of games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes.

  19. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not exhibit the freshness property and the validity of messages so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key such that the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence of games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes. PMID:28759615

  20. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.
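
    The selection criterion can be sketched as follows, assuming a simplified linear (rather than bilinear) baseline and hypothetical helper names: correct each velocity trace for a candidate timing parameter, then pick the candidate that maximizes the inter-station correlation of the corrected traces.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def detrend_after(v, t1):
    """Remove a straight-line baseline least-squares fitted to the
    samples after t1 -- a simplified stand-in for the bilinear
    correction applied to the velocity seismogram."""
    seg = v[t1:]
    n = len(seg)
    t = list(range(n))
    mt, ms = sum(t) / n, sum(seg) / n
    slope = (sum((a - mt) * (b - ms) for a, b in zip(t, seg))
             / sum((a - mt) ** 2 for a in t))
    out = v[:]
    for i in range(t1, len(v)):
        out[i] = v[i] - (ms + slope * ((i - t1) - mt))
    return out

def pick_t1(va, vb, candidates):
    """Net-based criterion: choose the timing parameter that maximizes
    the correlation of the corrected traces at two neighbouring stations."""
    return max(candidates,
               key=lambda t1: pearson(detrend_after(va, t1),
                                      detrend_after(vb, t1)))
```

    Because low-frequency waveforms at nearby stations are coherent while their instrumental baseline shifts are not, the correct onset time is the one at which the corrected traces agree best.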

  1. A potential application in quantum networks—Deterministic quantum operation sharing schemes with Bell states

    NASA Astrophysics Data System (ADS)

    Zhang, KeJia; Zhang, Long; Song, TingTing; Yang, YingHui

    2016-06-01

    In this paper, we propose several different design ideas for a novel topic in quantum cryptography: quantum operation sharing (QOS). Following these ideas, three QOS schemes, "HIEC" (hiding the messages in the entanglement correlation), "HIAO" (hiding the messages with assistant operations) and "HIMB" (hiding the messages in the selected measurement basis), are presented to deterministically share single-qubit operations on target states at a remote node. These schemes require only Bell states as quantum resources, so they can be directly applied in quantum networks, since Bell states are considered the basic quantum channels in quantum networks. Furthermore, analysis of the security and resource consumption shows that the task of QOS can be achieved securely and efficiently in these schemes.

  2. On-board closed-loop congestion control for satellite based packet switching networks

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Ivancic, William D.; Kim, Heechul

    1993-01-01

    NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate, and it is thus necessary to integrate a congestion control mechanism into the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations. In this scheme, the satellite continuously broadcasts the status of its output buffer and the ground stations respond by selectively discarding packets or by tagging the excess packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine the basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes closed-loop congestion control less responsive. The broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and a reduction function, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainty and allows a larger margin of error in status information. It can protect the high-priority packets from excessive loss while fully utilizing the downlink bandwidth.
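
    A toy discrete-time simulation conveys the tagging mechanism (all parameters and names here are illustrative, not taken from the paper): ground stations act on an occupancy report that is `delay` steps stale, tag new packets low-priority when the stale reading exceeds a threshold, and the satellite sheds tagged packets first when its buffer overflows.

```python
import collections
import random

def simulate(steps, capacity, delay, threshold, arrival_rate, service_rate,
             seed=1):
    """Sketch of the tagging scheme with a delayed feedback signal.
    Returns (dropped_high, dropped_low) packet counts."""
    rng = random.Random(seed)
    buffer = []                                  # priorities: True = high
    history = collections.deque([0] * delay, maxlen=delay)  # stale reports
    dropped_high = dropped_low = 0
    for _ in range(steps):
        stale_occupancy = history[0]             # report from `delay` steps ago
        arrivals = sum(rng.random() < arrival_rate for _ in range(10))
        for _ in range(arrivals):
            high = stale_occupancy <= threshold  # tag low when congested
            if len(buffer) >= capacity:          # on-board buffer full
                if not high:
                    dropped_low += 1             # shed tagged traffic first
                elif False in buffer:
                    buffer.remove(False)         # evict a low-priority packet
                    dropped_low += 1
                    buffer.append(True)
                else:
                    dropped_high += 1
            else:
                buffer.append(high)
        served = sum(rng.random() < service_rate for _ in range(10))
        del buffer[:served]                      # downlink transmission
        history.append(len(buffer))              # broadcast current status
    return dropped_high, dropped_low
```

    Under sustained overload, nearly all losses fall on tagged low-priority packets, which is the protection property the abstract attributes to the tagging scheme.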

  3. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks. PMID:26866606

  4. The Effects of Run-of-River Hydroelectric Power Schemes on Fish Community Composition in Temperate Streams and Rivers

    PubMed Central

    2016-01-01

    The potential environmental impacts of large-scale storage hydroelectric power (HEP) schemes have been well-documented in the literature. In Europe, awareness of these potential impacts and limited opportunities for politically-acceptable medium- to large-scale schemes have caused attention to focus on smaller-scale HEP schemes, particularly run-of-river (ROR) schemes, to contribute to meeting renewable energy targets. Run-of-river HEP schemes are often presumed to be less environmentally damaging than large-scale storage HEP schemes. However, there is currently a lack of peer-reviewed studies on their physical and ecological impact. The aim of this article was to investigate the effects of ROR HEP schemes on communities of fish in temperate streams and rivers, using a Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 23 systematically-selected ROR HEP schemes and 23 systematically-selected paired control sites. Six area-normalised metrics of fish community composition were analysed using a linear mixed effects model (number of species, number of fish, number of Atlantic salmon (Salmo salar), number of >1 year old Atlantic salmon, number of brown trout (Salmo trutta), and number of >1 year old brown trout). The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the number of species. However, no statistically significant effects were detected on the other five metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future fish community impact studies. PMID:27191717

  5. The Effects of Run-of-River Hydroelectric Power Schemes on Fish Community Composition in Temperate Streams and Rivers.

    PubMed

    Bilotta, Gary S; Burnside, Niall G; Gray, Jeremy C; Orr, Harriet G

    2016-01-01

    The potential environmental impacts of large-scale storage hydroelectric power (HEP) schemes have been well-documented in the literature. In Europe, awareness of these potential impacts and limited opportunities for politically-acceptable medium- to large-scale schemes have caused attention to focus on smaller-scale HEP schemes, particularly run-of-river (ROR) schemes, to contribute to meeting renewable energy targets. Run-of-river HEP schemes are often presumed to be less environmentally damaging than large-scale storage HEP schemes. However, there is currently a lack of peer-reviewed studies on their physical and ecological impact. The aim of this article was to investigate the effects of ROR HEP schemes on communities of fish in temperate streams and rivers, using a Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 23 systematically-selected ROR HEP schemes and 23 systematically-selected paired control sites. Six area-normalised metrics of fish community composition were analysed using a linear mixed effects model (number of species, number of fish, number of Atlantic salmon (Salmo salar), number of >1 year old Atlantic salmon, number of brown trout (Salmo trutta), and number of >1 year old brown trout). The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the number of species. However, no statistically significant effects were detected on the other five metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future fish community impact studies.

  6. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.

  7. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    PubMed

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
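
    To give a flavour of the simplest of these encoding schemes, here is Amino Acid Composition (AAC) in a few lines of Python. This is an independent sketch, not iFeature's own implementation; the function name is illustrative.

```python
# The 20 standard amino acids, in the conventional alphabetical order.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Amino Acid Composition (AAC): a 20-dimensional vector giving the
    fraction of each standard residue in the protein/peptide sequence."""
    seq = sequence.upper()
    n = len(seq) or 1          # guard against an empty sequence
    return [seq.count(a) / n for a in AMINO_ACIDS]
```

    Richer descriptors in the toolkit (dipeptide composition, pseudo-amino-acid composition, AAindex properties, and so on) extend this same idea of mapping a variable-length sequence to a fixed-length numeric vector for machine learning.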

  8. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving the mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner experience, and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  9. PLS-based quantitative structure-activity relationship for substituted benzamides of clebopride type. Application of experimental design in drug design.

    PubMed

    Norinder, U; Högberg, T

    1992-04-01

    The advantageous approach of using an experimentally designed training set as the basis for establishing a quantitative structure-activity relationship with good predictive capability is described. The training set was selected from a fractional factorial design scheme based on a principal component description of physico-chemical parameters of aromatic substituents. The derived model successfully predicts the activities of additional substituted benzamides of 6-methoxy-N-(4-piperidyl)salicylamide type. The major influence on activity of the 3-substituent is demonstrated.

  10. An orbital localization criterion based on the theory of "fuzzy" atoms.

    PubMed

    Alcoba, Diego R; Lain, Luis; Torre, Alicia; Bochicchio, Roberto C

    2006-04-15

    This work proposes a new procedure for localizing molecular and natural orbitals. The localization criterion presented here is based on the partitioning of the overlap matrix into atomic contributions within the theory of "fuzzy" atoms. Our approach has several advantages over other schemes: it is computationally inexpensive, preserves the sigma/pi-separability in planar systems and provides a straightforward interpretation of the resulting orbitals in terms of their localization indices and atomic occupancies. The corresponding algorithm has been implemented and its efficiency tested on selected molecular systems. (c) 2006 Wiley Periodicals, Inc.

  11. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    PubMed

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.

  12. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    PubMed Central

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  13. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
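The soft-hard split can be caricatured as follows: a soft statistic is summed within each cluster, and the fusion center takes a weighted hard vote over cluster decisions. The sketch below uses simple per-sensor energy statistics rather than the paper's LRT, and all thresholds, SNRs, and weights are assumed values for illustration.

```python
import math
import random

random.seed(1)  # reproducible illustration

def sense(snr_db, present, n=50):
    """Energy statistic of one sensor over n samples (unit noise power)."""
    amp = math.sqrt(10 ** (snr_db / 10)) if present else 0.0
    return sum((amp + random.gauss(0, 1)) ** 2 for _ in range(n)) / n

def cluster_decision(member_snrs, present, threshold):
    """Soft combination inside a cluster: sum member statistics, then decide."""
    stat = sum(sense(s, present) for s in member_snrs)
    return 1 if stat > threshold else 0

def fusion(clusters, present, threshold, weights):
    """Weighted hard combination of cluster decisions at the fusion center."""
    votes = sum(w * cluster_decision(c, present, threshold)
                for c, w in zip(clusters, weights))
    return 1 if votes >= sum(weights) / 2 else 0

clusters = [[0.0, 0.0, 0.0]] * 4   # four clusters, three sensors each, 0 dB SNR
weights = [1.0] * 4                # equal cluster reliability weights
pd = sum(fusion(clusters, True, 3.5, weights) for _ in range(100)) / 100
pf = sum(fusion(clusters, False, 3.5, weights) for _ in range(100)) / 100
```

Only one decision bit per cluster travels to the fusion center, which is the source of the reporting-overhead saving the abstract describes.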

  14. Sensing of molecules using quantum dynamics

    PubMed Central

    Migliore, Agostino; Naaman, Ron; Beratan, David N.

    2015-01-01

    We design sensors where information is transferred between the sensing event and the actuator via quantum relaxation processes, through distances of a few nanometers. We thus explore the possibility of sensing using intrinsically quantum mechanical phenomena that are also at play in photobiology, bioenergetics, and information processing. Specifically, we analyze schemes for sensing based on charge transfer and polarization (electronic relaxation) processes. These devices can have surprising properties. Their sensitivity can increase with increasing separation between the sites of sensing (the receptor) and the actuator (often a solid-state substrate). This counterintuitive response and other quantum features give these devices favorable characteristics, such as enhanced sensitivity and selectivity. Using coherent phenomena at the core of molecular sensing presents technical challenges but also suggests appealing schemes for molecular sensing and information transfer in supramolecular structures. PMID:25911636

  15. Re-formulation and Validation of Cloud Microphysics Schemes

    NASA Astrophysics Data System (ADS)

    Wang, J.; Georgakakos, K. P.

    2007-12-01

    The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in the cloud microphysics schemes embedded in models such as WRF and MM5 and in cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating four graupel production terms involved in the accretion between rain, snow, and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of these terms and thus become one of the major sources of graupel overproduction and the associated extreme radar reflectivity in simulations. These results are corroborated by several reports. For example, the analytic solution overestimates the graupel production by collisions between raindrops and snow by up to 230%. The dichotomy between "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and thus becomes a significant cause of graupel overproduction in hydrometeor simulations. In addition, the generation of graupel of the same density by both the freezing of supercooled water and the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and dynamic fallspeed-diameter and density-diameter relationships of rimed snow are assigned to graupel based on the diagnosed riming degree. To test whether these new treatments can improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies. A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and microphysics simulation significantly.
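The riming-degree idea can be caricatured as a weighted blend of snow-like and graupel-like bulk properties. The power-law fallspeed coefficients and bulk densities below are illustrative placeholders in the style of empirical fallspeed-diameter relationships, not the values fitted in the study.

```python
# Hypothetical power-law fallspeed coefficients V(D) = a * D**b (SI units)
# and bulk densities; placeholders only, not the study's fitted values.
SNOW    = {"a": 11.72, "b": 0.41, "rho": 100.0}   # unrimed snow
GRAUPEL = {"a": 19.3,  "b": 0.37, "rho": 400.0}   # completely rimed

def rimed_properties(riming_degree, D):
    """Blend fallspeed and bulk density between snow (f = 0) and graupel
    (f = 1) according to the diagnosed riming degree f."""
    f = min(max(riming_degree, 0.0), 1.0)
    v = ((1 - f) * SNOW["a"] * D ** SNOW["b"]
         + f * GRAUPEL["a"] * D ** GRAUPEL["b"])
    rho = (1 - f) * SNOW["rho"] + f * GRAUPEL["rho"]
    return v, rho
```

A partially rimed particle then falls faster and is denser than pure snow but slower and lighter than pure graupel, filling the gap the abstract identifies.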

  16. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence, combustion, and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinement and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage of the Runge-Kutta time stepping. One could also imagine a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. The method is briefly described along with selected numerical experiments.
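The base-scheme ingredients described above (high order centered differences, Runge-Kutta time stepping, and a linear high-order dissipation filter applied after each step) can be sketched for 1-D periodic linear advection. The grid size, CFL number, and filter strength are arbitrary illustrative choices, and the constant-strength filter stands in for the adaptive, sensor-controlled dissipation of the actual method.

```python
import math

N, L, c = 64, 1.0, 1.0        # grid points, domain length, advection speed
dx = L / N
dt = 0.4 * dx / c             # CFL-limited time step

def dudx4(u):
    """Fourth-order centered first derivative on a periodic grid."""
    n = len(u)
    return [(u[(i - 2) % n] - 8 * u[(i - 1) % n]
             + 8 * u[(i + 1) % n] - u[(i + 2) % n]) / (12 * dx)
            for i in range(n)]

def rhs(u):
    return [-c * d for d in dudx4(u)]   # du/dt = -c du/dx

def rk4_step(u):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs([ui + 0.5 * dt * k for ui, k in zip(u, k1)])
    k3 = rhs([ui + 0.5 * dt * k for ui, k in zip(u, k2)])
    k4 = rhs([ui + dt * k for ui, k in zip(u, k3)])
    return [ui + dt / 6 * (a + 2 * b + 2 * g + d)
            for ui, a, b, g, d in zip(u, k1, k2, k3, k4)]

def filter6(u, eps=0.01):
    """Linear sixth-order dissipation filter applied after each step;
    damps the highest grid modes while barely touching smooth ones."""
    n = len(u)
    return [u[i] + eps * (u[(i - 3) % n] - 6 * u[(i - 2) % n]
            + 15 * u[(i - 1) % n] - 20 * u[i] + 15 * u[(i + 1) % n]
            - 6 * u[(i + 2) % n] + u[(i + 3) % n]) for i in range(n)]

u = [math.sin(2 * math.pi * i * dx) for i in range(N)]
for _ in range(int(round(1.0 / dt))):   # advect for exactly one period
    u = filter6(rk4_step(u))
```

After one full period the smooth wave should return essentially unchanged, confirming that the dissipation acts only on the unresolved scales.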

  17. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. The general tool used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian probability distribution. It is now well established that most of the physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution should provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
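One common way to draw power-law (fractal) model realizations with a prescribed Hurst coefficient is spectral synthesis: shape the amplitude spectrum of random phases as a power law and inverse-transform. The sketch below is a generic sampler of this kind, not the authors' method; the spectral exponent beta = 2H + 1 and all parameter values are assumptions.

```python
import numpy as np

def fractal_model(n, hurst, mean=0.0, std=1.0, seed=0):
    """Spectral synthesis of a power-law (fractal) model realization:
    random phases with amplitude spectrum f**(-beta/2), beta = 2*hurst + 1."""
    rng = np.random.default_rng(seed)
    beta = 2.0 * hurst + 1.0
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)           # power-law amplitudes, zero DC
    spectrum = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, freqs.size))
    x = np.fft.irfft(spectrum, n)
    x = (x - x.mean()) / x.std()                   # whiten, then rescale
    return mean + std * x

# e.g. a 256-sample initial model around a mean of 2500 with spread 150
# (hypothetical units), ready to seed a global optimization run
model = fractal_model(256, hurst=0.7, mean=2500.0, std=150.0)
```

Each call with a new seed yields a fresh initial model with the desired mean, variance, and long-range (Hurst) correlation structure.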

  18. A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems

    PubMed Central

    Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok

    2018-01-01

    Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology, and geometric theorems are then applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To make the IT-based CP estimation scheme applicable in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze the properties of three GT-based tracking algorithms, and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme generally provides finer tracking accuracy under even lower node density and noisier topologies, in comparison to the other schemes. PMID:29562621
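The geometric primitive at the heart of any CP-based tracker is finding where the target's path crosses a LOS link in the 2-D topology. The sketch below is the standard segment-intersection computation, not the paper's IT/ERT construction, and all coordinates are hypothetical.

```python
def crossing_point(p1, p2, q1, q2):
    """Crossing point (CP) of the LOS link p1-p2 with the target's path
    segment q1-q2, or None if the segments do not intersect."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None                        # parallel segments
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    s = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

A sequence of such CPs across successive links yields the discrete trajectory the abstract describes.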

  19. [Incentive for Regional Risk Selection in the German Risk Structure Compensation Scheme].

    PubMed

    Wende, Danny

    2017-10-01

    The introduction of the new GKV-FQWG law strengthens competition between statutory health insurers. If incentives for risk selection exist, they may trigger a battle for low-cost customers. This study aims to document and discuss incentives for regional risk selection in the German risk structure compensation scheme. Regional autocorrelation in the financial parameters of the risk structure compensation scheme is identified with Moran's I. Incentives for regional risk selection do indeed exist. The risk structure compensation scheme reduces 91% of the effect and thus helps to curb risk selection. Nevertheless, a connection between regional situation and competition could be shown (correlation: 69.5%). Only the integration of regional control variables into the risk compensation eliminates the regional autocorrelation. The current risk structure compensation scheme thus leads to regional inequalities and, as a consequence, to risk selection and distortion of competition. © Georg Thieme Verlag KG Stuttgart · New York.
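The spatial statistic used here, Moran's I, measures whether values in neighboring regions are more similar (I near +1) or more dissimilar (I near -1) than chance would suggest. A minimal sketch of the computation, with a hypothetical four-region adjacency structure:

```python
def morans_i(values, weights):
    """Moran's I: spatial autocorrelation of values x_i under weights w_ij."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Four regions on a line with binary adjacency; alternating values are
# perfectly negatively autocorrelated (I = -1).
adjacency = [[0, 1, 0, 0],
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]
i_stat = morans_i([1, 0, 1, 0], adjacency)
```

Applied to per-region financial surpluses of the compensation scheme, a significantly positive I would indicate the regional clustering that creates selection incentives.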

  20. Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    PubMed Central

    Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre

    2009-01-01

    Supervised learning of microarray data has received much attention in recent years. Multiclass cancer diagnosis, based on selected gene profiles, is used as an adjunct to clinical diagnosis. However, a supervised diagnosis may hinder patient care, add expense, or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure a higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. State-of-the-art multiclass algorithms can be considered a particular case of the proposed algorithm, where the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five gene-selected datasets are used to assess the performance of the proposed method. Results are discussed, and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932
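The decision structure of class-selective rejection can be sketched with a simple posterior-threshold rule: instead of forcing a single label, the classifier returns the subset of classes it cannot confidently exclude, and an empty subset means total rejection. This is a caricature of the idea, not the paper's ν-1-SVM loss-minimization machinery, and the threshold value is an assumption.

```python
def class_selective_decision(posteriors, threshold=0.2):
    """Keep every class whose posterior clears the threshold; an empty
    result means total rejection (the sample is deferred for review).
    posteriors: dict mapping class label -> estimated posterior probability."""
    kept = [c for c, p in posteriors.items() if p >= threshold]
    return kept or ["reject"]

# An ambiguous sample is retained in two classes rather than misassigned:
decision = class_selective_decision({"ALL": 0.7, "AML": 0.25, "MLL": 0.05})
```

A cost-sensitive version would replace the fixed threshold with per-class, per-decision penalties, which is where the paper's general loss function comes in.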
