Sample records for reduction scheme based

  1. Digital Noise Reduction: An Overview

    PubMed Central

    Bentler, Ruth; Chiou, Li-Kuei

    2006-01-01

    Digital noise reduction schemes are being used in most hearing aids currently marketed. Unlike the earlier analog schemes, these manufacturer-specific algorithms are developed to acoustically analyze the incoming signal and alter the gain/output characteristics according to their predetermined rules. Although most are modulation-based schemes (ie, differentiating speech from noise based on temporal characteristics), spectral subtraction techniques are being applied as well. The purpose of this article is to overview these schemes in terms of their differences and similarities. PMID:16959731

  2. Genetic progress in multistage dairy cattle breeding schemes using genetic markers.

    PubMed

    Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P

    2005-04-01

    The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. 
Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny-tested bulls while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.

  3. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations: the deviational schemes considered in this study lead either to instabilities, in the case of two-weight methods, or to large statistical errors, if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially at low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
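The correlated-process idea above is, at its core, a control-variate construction. A minimal Python sketch of that principle follows; it is not the authors' Fokker–Planck scheme, and the target function `f`, the auxiliary `g`, and the Gaussian sampling are all illustrative assumptions:

```python
import random

def cv_estimate(f, g, g_mean, xs, beta=1.0):
    """Control-variate estimator of E[f]: subtract the correlated
    auxiliary quantity g (whose mean g_mean is known analytically)
    to cancel most of the sampling noise."""
    return sum(f(x) - beta * (g(x) - g_mean) for x in xs) / len(xs)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]

f = lambda x: x * x + 0.1 * x   # target quantity; true mean is 1.0
g = lambda x: x * x             # auxiliary with known mean 1.0

plain = sum(f(x) for x in xs) / len(xs)   # direct Monte Carlo estimate
corr = cv_estimate(f, g, 1.0, xs)         # correlated-process estimate
```

Because `f - g` has far smaller variance than `f` itself, the correlated estimate concentrates much more tightly around the true value 1.0 than the direct average does.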

  4. Modified PTS-based PAPR Reduction for FBMC-OQAM Systems

    NASA Astrophysics Data System (ADS)

    Deng, Honggui; Ren, Shuang; Liu, Yan; Tang, Chengying

    2017-10-01

    The filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) scheme has raised great interest in 5G communication research. However, FBMC-OQAM also has the inherent drawback of a high peak-to-average power ratio (PAPR) that must be addressed. Due to the overlapping structure of FBMC-OQAM signals, directly applying the conventional partial transmit sequence (PTS) scheme proposed for OFDM to FBMC-OQAM is proven to be ineffective. In this paper, we propose a modified PTS-based scheme, called the sparse PTS (S-PTS) scheme, which employs phase rotation factors to optimize only the phase of the sparse peak signals. Theoretical analysis and simulation results show that the proposed S-PTS scheme provides significant PAPR reduction with lower computational complexity.
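For reference, the conventional PTS baseline that the abstract says transfers poorly to FBMC-OQAM works as sketched below for plain OFDM. The subblock count, phase set, and QPSK mapping are illustrative assumptions, not parameters from the paper:

```python
import cmath
import itertools
import math
import random

def ifft(X):
    """Naive O(N^2) inverse DFT, enough for a small demonstration."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    peak = max(abs(v) ** 2 for v in x)
    avg = sum(abs(v) ** 2 for v in x) / len(x)
    return 10 * math.log10(peak / avg)

def pts(X, V=4, phases=(1, -1, 1j, -1j)):
    """Conventional PTS: partition the subcarriers into V subblocks,
    IFFT each, then exhaustively search per-subblock phase rotations
    for the lowest-PAPR combination."""
    N = len(X)
    step = N // V
    parts = [ifft([X[k] if v * step <= k < (v + 1) * step else 0
                   for k in range(N)]) for v in range(V)]
    best_papr, best_b = None, None
    for tail in itertools.product(phases, repeat=V - 1):
        b = (1,) + tail                 # fix the first factor to remove ambiguity
        x = [sum(b[v] * parts[v][n] for v in range(V)) for n in range(N)]
        p = papr_db(x)
        if best_papr is None or p < best_papr:
            best_papr, best_b = p, b
    return best_papr, best_b

random.seed(1)
X = [random.choice([1, -1]) + 1j * random.choice([1, -1]) for _ in range(32)]
orig = papr_db(ifft(X))
reduced, factors = pts(X)
# the identity rotation is among the candidates, so reduced <= orig
```

The cost of this exhaustive search (|phases|^(V-1) candidate combinations) is exactly what motivates restricted variants such as the S-PTS idea of rotating only peak-dominant components.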

  5. Chaos-based CAZAC scheme for secure transmission in OFDM-PON

    NASA Astrophysics Data System (ADS)

    Fu, Xiaosong; Bi, Meihua; Zhou, Xuefang; Yang, Guowei; Lu, Yang; Hu, Miao

    2018-01-01

    To effectively resist malicious eavesdropping and performance deterioration, a novel chaos-based secure transmission scheme is proposed to enhance physical layer security and reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing passive optical network (OFDM-PON). By randomly extracting values from a common CAZAC sequence, a specially designed constant amplitude zero autocorrelation (CAZAC) sequence is created for system encryption and PAPR reduction, enhancing transmission security. This method is verified in a 10-Gb/s encrypted OFDM-PON with 20-km fiber transmission. Results show that, compared to common OFDM-PON, our scheme achieves a 3-dB PAPR reduction and a 1-dB receiver sensitivity improvement.

  6. Error reduction program: A progress report

    NASA Technical Reports Server (NTRS)

    Syed, S. A.

    1984-01-01

    Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
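The numerical diffusion this record evaluates is easy to exhibit with the simplest member of the upwind family. The sketch below (not the NASA combustor code; grid size, Courant number, and pulse shape are illustrative) advects a square pulse once around a periodic grid with first-order upwind differencing; the exact solution is a pure translation, so any peak decay is purely numerical:

```python
def upwind_step(u, c):
    """One time step of first-order upwind for u_t + a*u_x = 0 (a > 0)
    on a periodic grid; c is the Courant number a*dt/dx."""
    n = len(u)
    return [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]

n, c, steps = 100, 0.5, 200     # 200 steps * 0.5 cells/step = one revolution
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]   # square pulse
initial_peak = max(u)
for _ in range(steps):
    u = upwind_step(u, c)
final_peak = max(u)
# the scheme conserves total "mass" exactly, but the pulse peak decays
# even though the exact solution would return it unchanged
```

Higher-order schemes such as QUDS and BSUDS2 exist precisely to shrink this artificial smearing while keeping the stability of upwinding.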

  7. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameters size, and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.

  8. Assessment of the reduction methods used to develop chemical schemes: building of a new chemical scheme for VOC oxidation suited to three-dimensional multiscale HOx-NOx-VOC chemistry simulations

    NASA Astrophysics Data System (ADS)

    Szopa, S.; Aumont, B.; Madronich, S.

    2005-09-01

    The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes.

    The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, this being small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.

  9. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms.

    PubMed

    Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E

    2007-01-01

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories: one category is better suited to the retrieval of semantically similar cases, while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining a high detection rate for malignant masses.
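The abstract does not spell out its eight entropy-based measures, but the best-known member of that family is mutual information between two image patches, computed from their joint gray-level histogram. A toy sketch (patch contents and bin count are illustrative assumptions):

```python
import math

def mutual_information(a, b, bins=8):
    """Mutual information I(A;B), in bits, from the joint histogram of
    two equally sized 8-bit patches (flat lists of ints in 0..255)."""
    assert len(a) == len(b)
    n = len(a)
    joint = {}
    for x, y in zip(a, b):
        key = (x * bins // 256, y * bins // 256)
        joint[key] = joint.get(key, 0) + 1
    px, py = {}, {}
    for (i, j), c in joint.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log2(pij / ((px[i] / n) * (py[j] / n)))
    return mi

patch = [(i * 7) % 256 for i in range(256)]        # synthetic query patch
self_sim = mutual_information(patch, patch)        # identical patches
other = [(i * 31 + 97) % 256 for i in range(256)]  # unrelated patch
cross_sim = mutual_information(patch, other)
# a patch is maximally informative about itself, so self_sim > cross_sim
```

In a retrieval setting, the query region would be scored against every template in the knowledge database and the highest-MI cases returned.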

  10. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories: one category is better suited to the retrieval of semantically similar cases, while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining a high detection rate for malignant masses.

  11. PAPR reduction based on tone reservation scheme for DCO-OFDM indoor visible light communications.

    PubMed

    Bai, Jurong; Li, Yong; Yi, Yang; Cheng, Wei; Du, Huimin

    2017-10-02

    High peak-to-average power ratio (PAPR) leads to out-of-band power and in-band distortion in direct current-biased optical orthogonal frequency division multiplexing (DCO-OFDM) systems. In order to effectively reduce the PAPR with faster convergence and lower complexity, this paper proposes a tone reservation based scheme that combines the signal-to-clipping noise ratio (SCR) procedure with the least squares approximation (LSA) procedure. In the proposed scheme, the transmitter of the DCO-OFDM indoor visible light communication (VLC) system is designed to transform the PAPR-reduced signal into a real-valued positive OFDM signal without doubling the transmission bandwidth. Moreover, the communication distance and the light emitting diode (LED) irradiance angle are taken into consideration in the evaluation of the system bit error rate (BER). The PAPR reduction efficiency of the proposed scheme is remarkable for DCO-OFDM indoor VLC systems.
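The tone-reservation principle underlying such schemes can be sketched generically: a few subcarriers carry no data, and a clipping-derived correction is injected only on those tones so the data is never distorted. This is a minimal clipping-based loop, not the paper's SCR/LSA procedures; the tone layout, clip ratio, and iteration count are assumptions:

```python
import cmath
import math
import random

def ifft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def papr_db(x):
    peak = max(abs(v) ** 2 for v in x)
    avg = sum(abs(v) ** 2 for v in x) / len(x)
    return 10 * math.log10(peak / avg)

def tone_reservation(X, reserved, clip_ratio=1.4, iters=10):
    """Clipping-based tone reservation: clip time-domain peaks, keep the
    clipping noise only on the reserved (data-free) tones, add it back.
    Data tones are never touched; the best candidate seen is kept."""
    X = list(X)
    best, best_papr = list(X), papr_db(ifft(X))
    for _ in range(iters):
        x = ifft(X)
        rms = math.sqrt(sum(abs(v) ** 2 for v in x) / len(x))
        amp = clip_ratio * rms
        clipped = [v if abs(v) <= amp else amp * v / abs(v) for v in x]
        noise = fft([c - v for c, v in zip(clipped, x)])
        for k in reserved:
            X[k] += noise[k]
        p = papr_db(ifft(X))
        if p < best_papr:
            best, best_papr = list(X), p
    return best, best_papr

random.seed(2)
N = 64
reserved = set(range(0, N, 8))          # 8 of 64 tones reserved for peak reduction
X = [0 if k in reserved else random.choice([1, -1, 1j, -1j]) for k in range(N)]
before = papr_db(ifft(X))
X_tr, after = tone_reservation(X, reserved)
```

Because the original symbol is itself a candidate, the returned PAPR can only improve or stay the same, and the data tones are bit-for-bit unchanged.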

  12. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940

  13. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.

  14. Assessing the impact of a cattle risk-based trading scheme on the movement of bovine tuberculosis infected animals in England and Wales.

    PubMed

    Adkin, A; Brouwer, A; Downs, S H; Kelly, L

    2016-01-01

    The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of the implementation of a cattle risk-based trading scheme, to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded, using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score, and a compulsory statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade were estimated due to provision of the risk score to potential purchasers. An estimated mean of 3981 bTB-infected animals were sold to purchasers with no RBT scheme in place in one year; with 90% confidence, the true value was between 2775 and 5288. This result depends on the estimated between-herd prevalence used in the risk assessment, which is uncertain. With voluntary provision of the risk score by farmers, on average 17% of movements were affected (the purchaser did not wish to buy once the risk score was available), with a reduction of 23% in infected animals being purchased initially. Compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with a reduction of 37% in infected animals being purchased initially, increasing to a 53% reduction in infected movements from higher-risk sellers (scores 4 and 5). 
The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers, which might be achieved through education programmes, or to an estimated mean of 49% if a rule were implemented preventing farmers from purchasing animals of higher risk than their own herd. Given the voluntary trials of a trading scheme currently taking place, recommendations for future work include monitoring initial uptake and changes in farmers' purchase patterns. Such data could be used to update the risk assessment and reduce the uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  15. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. 
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
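The MSB-truncated SAD idea from this record can be illustrated in a few lines. This sketch is an assumption-laden simplification of the dissertation's scheme (block contents, truncation depth, and candidate offsets are all invented for the demo): low-order bits are dropped from each operand, shrinking the adder width, yet the best-matching block is still identified.

```python
def sad(a, b):
    """Exact sum of absolute differences between two pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def sad_truncated(a, b, drop=4):
    """Approximate SAD keeping only the most significant bits of each
    8-bit sample: the datapath is `drop` bits narrower per operand,
    which is where the energy saving comes from."""
    return sum(abs((x >> drop) - (y >> drop)) for x, y in zip(a, b)) << drop

ref = [16 * i % 256 for i in range(16)]                 # toy 4x4 reference block
cands = [[(v + s) % 256 for v in ref] for s in (0, 8, 64)]
best_exact = min(range(len(cands)), key=lambda i: sad(ref, cands[i]))
best_approx = min(range(len(cands)), key=lambda i: sad_truncated(ref, cands[i]))
# small ADs may be distorted by truncation, but the winning (exact-match)
# candidate survives, which is all motion estimation needs
```

This matches the observation in the abstract: most AD values are small, and the large ones that truncation perturbs rarely belong to the block that ultimately wins the search.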

  16. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios were considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
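The AHP ranking step mentioned in the analytical module boils down to extracting criterion weights from a pairwise-comparison matrix. A minimal sketch via power iteration toward the principal eigenvector (the 2x2 judgment matrix below is a made-up example, not the paper's):

```python
def ahp_weights(M, iters=50):
    """Criterion weights from an AHP pairwise-comparison matrix,
    computed by power iteration toward the principal eigenvector."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # renormalize to sum to 1
    return w

# hypothetical judgment: flood depth three times as important as duration
M = [[1.0, 3.0],
     [1.0 / 3.0, 1.0]]
weights = ahp_weights(M)    # approximately [0.75, 0.25]
```

Each flooding node would then be scored as the weighted sum of its normalized depth and duration indicators, and the ranking drives where storage is placed first.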

  17. A Hierarchical Z-Scheme α-Fe2O3/g-C3N4 Hybrid for Enhanced Photocatalytic CO2 Reduction.

    PubMed

    Jiang, Zhifeng; Wan, Weiming; Li, Huaming; Yuan, Shouqi; Zhao, Huijun; Wong, Po Keung

    2018-03-01

    The challenge in the artificial photosynthesis of fossil resources from CO2 by utilizing solar energy is to achieve stable photocatalysts with effective CO2 adsorption capacity and high charge-separation efficiency. A hierarchical direct Z-scheme system consisting of urchin-like hematite and carbon nitride provides enhanced photocatalytic activity for the reduction of CO2 to CO, yielding a CO evolution rate of 27.2 µmol g⁻¹ h⁻¹ without a cocatalyst or sacrificial reagent, which is >2.2 times higher than that produced by g-C3N4 alone (10.3 µmol g⁻¹ h⁻¹). The enhanced photocatalytic activity of the Z-scheme hybrid material can be ascribed to its unique characteristics that accelerate the reduction process: (i) the 3D hierarchical structure of urchin-like hematite and its preferable basic sites promote CO2 adsorption, and (ii) the Z-scheme feature efficiently promotes the separation of electron-hole pairs and enhances the reducibility of electrons in the conduction band of g-C3N4. The origin of this advantage of the hierarchical Z-scheme is not only explained based on the experimental data but also investigated by modeling CO2 adsorption and CO adsorption on three different atomic-scale surfaces via density functional theory calculations. The study creates new opportunities for hierarchical hematite and other metal-oxide-based Z-scheme systems for solar fuel generation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Computerized mass detection in whole breast ultrasound images: reduction of false positives using bilateral subtraction technique

    NASA Astrophysics Data System (ADS)

    Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako

    2007-03-01

    The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using the bilateral subtraction technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using comparison of left and right breasts. In this study, we propose a false positive reduction scheme based on the bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using the information of edge directions. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass candidate region and a region with the same position and size as the candidate region in the contralateral breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. The abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
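The decision rule described above reduces to a mean gray-value comparison once the two breasts are registered. A simplified sketch (the threshold and pixel values are invented; the real scheme registers images via nipple positions and skin lines first):

```python
def region_mean(region):
    """Average gray value of a region given as a flat list of pixels."""
    return sum(region) / len(region)

def is_false_positive(candidate, contralateral, threshold=10.0):
    """Bilateral-subtraction test, simplified: if a candidate region has
    nearly the same mean gray value as the same-position region in the
    registered contralateral breast, it is likely symmetric normal
    tissue rather than a mass."""
    return abs(region_mean(candidate) - region_mean(contralateral)) < threshold

mass_region = [180] * 25        # bright candidate in the right breast
mirror_normal = [120] * 25      # same position, left breast: normal tissue
symmetric_tissue = [122] * 25   # a candidate that also appears on the left

keep = not is_false_positive(mass_region, mirror_normal)   # asymmetric: keep
drop = is_false_positive(symmetric_tissue, mirror_normal)  # symmetric: discard
```

The asymmetric (mass-like) candidate survives while the symmetric one is discarded, which is exactly the false-positive behavior the study reports.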

  19. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme to adjust the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across seasons and elevation levels. The experiments revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions in the first- and second-quantile biases. However, the bias in the third quantile was reduced only during dry months. Across elevation levels, the performance of the bias correction process differed significantly only in the skewness indicator.
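The combination of a genetic algorithm with a nonlinear power transformation can be sketched as below. This is a toy version under stated assumptions: the transform is taken as y = a*x^b, the fitness matches only the mean and variance of the corrected series, and the GA operators, population size, and synthetic rainfall data are all invented for the demo rather than taken from the paper:

```python
import random

def stats(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def fitness(ab, sat, obs_stats):
    """Squared mismatch between the corrected series' mean/variance and
    the observed mean/variance, for the power transform y = a * x**b."""
    a, b = ab
    m, v = stats([a * x ** b for x in sat])
    om, ov = obs_stats
    return (m - om) ** 2 + (v - ov) ** 2

def ga_fit(sat, obs, pop_size=40, gens=60, seed=7):
    """Tiny elitist GA over (a, b). The identity transform is seeded into
    the population, so the result can never be worse than no correction."""
    rng = random.Random(seed)
    target = stats(obs)
    pop = [(1.0, 1.0)] + [(rng.uniform(0.2, 3.0), rng.uniform(0.3, 2.0))
                          for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda ab: fitness(ab, sat, target))
        elite = pop[: pop_size // 4]
        while len(elite) < pop_size:
            (a1, b1), (a2, b2) = rng.choice(elite[:10]), rng.choice(elite[:10])
            elite.append(((a1 + a2) / 2 + rng.gauss(0, 0.05),   # crossover
                          (b1 + b2) / 2 + rng.gauss(0, 0.05)))  # + mutation
        pop = elite
    return min(pop, key=lambda ab: fitness(ab, sat, target))

rng = random.Random(3)
sat = [rng.uniform(0.1, 20.0) for _ in range(200)]   # synthetic satellite series
obs = [x ** 1.2 for x in sat]                        # synthetic "gauge" truth
a, b = ga_fit(sat, obs)
```

Because of the elitist selection and the identity seed, the fitted transform is guaranteed to match the observed mean and variance at least as well as leaving the satellite data uncorrected.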

  20. Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique

    NASA Astrophysics Data System (ADS)

    Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel

    2016-12-01

    Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that the trellis-based selected mapping (TSLM) scheme for FBMC-OQAM not only is superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper is an extension of that work, in which we analyze TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to TSLM that requires much less hardware memory than the originally proposed scheme and also has lower latency. Additionally, the impact of the time duration of the partial PAPR on the performance of TSLM is studied, and its lower bound is identified by proposing a suitable time duration. A thorough and fair performance comparison is also made with an existing trellis-based scheme proposed in the literature. The simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.

  1. Incentive Analysis for Clean Water Act Reauthorization: Point Source/Nonpoint Source Trading for Nutrient Discharge Reductions (1992)

    EPA Pesticide Factsheets

    Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.

  2. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme that can estimate the time-varying noise variance by using prior information from watermark symbols, improving the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme yields about 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.
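The core idea of estimating noise variance from known symbols can be sketched as follows; the frame layout, watermark positions, and BPSK mapping are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frame: BPSK data with known "watermark" symbols at fixed positions.
n = 1000
wm_pos = np.arange(0, n, 10)                     # every 10th symbol is a watermark
tx = rng.choice([-1.0, 1.0], n)
tx[wm_pos] = 1.0                                 # known watermark value
true_sigma2 = 0.25
rx = tx + rng.normal(0.0, np.sqrt(true_sigma2), n)

# Estimate the noise variance from the watermark symbols only; a decoder can
# redo this per frame to track time-varying noise.
est_sigma2 = np.mean((rx[wm_pos] - tx[wm_pos]) ** 2)
print(f"true {true_sigma2:.3f}, estimated {est_sigma2:.3f}")
```

The estimate then feeds the channel-reliability (LLR) computation of the NB-LDPC decoder instead of a stale, fixed variance.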

  3. A Cu-Zn nanoparticle promoter for selective carbon dioxide reduction and its application in visible-light-active Z-scheme systems using water as an electron donor.

    PubMed

    Yin, Ge; Sako, Hiroshi; Gubbala, Ramesh V; Ueda, Shigenori; Yamaguchi, Akira; Abe, Hideki; Miyauchi, Masahiro

    2018-04-17

    Selective carbon dioxide photoreduction to produce formic acid was achieved under visible light irradiation using water molecules as electron donors, similar to natural plants, based on the construction of a Z-scheme light harvesting system modified with a Cu-Zn alloy nanoparticle co-catalyst. The faradaic efficiency of our Z-scheme system for HCOOH generation was over 50% under visible light irradiation.

  4. Indirect Z-Scheme BiOI/g-C3N4 Photocatalysts with Enhanced Photoreduction CO2 Activity under Visible Light Irradiation.

    PubMed

    Wang, Ji-Chao; Yao, Hong-Chang; Fan, Ze-Yu; Zhang, Lin; Wang, Jian-She; Zang, Shuang-Quan; Li, Zhong-Jun

    2016-02-17

    Rational design and construction of Z-scheme photocatalysts have received much attention in the field of CO2 reduction because of their great potential to address the current energy and environmental crises. In this study, a series of Z-scheme BiOI/g-C3N4 photocatalysts is synthesized and their photocatalytic performance for CO2 reduction to produce CO, H2, and/or CH4 is evaluated under visible light irradiation (λ > 400 nm). The results show that the as-synthesized composites exhibit markedly higher photocatalytic activity than pure g-C3N4 and BiOI, and that the product yields change remarkably depending on the reaction conditions, such as the irradiation wavelength. Emphasis is placed on identifying how charge transfers across the heterojunctions, and an indirect Z-scheme charge transfer mechanism is verified by detecting the intermediate I3(-) ions. The reaction mechanism is further proposed based on the detection of the intermediates (•)OH and H2O2. This work may be useful for the rational design of new types of Z-scheme photocatalysts and provides some illuminating insights into the Z-scheme transfer mechanism.

  5. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise in the face of a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
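A minimal sketch of the two-stage selection idea, with random numbers standing in for the preliminary (low-cost registration) and refined (full registration) relevance metrics; all sizes and scores are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical relevance of 200 atlases to one target image.
n_atlas, fusion_size, augmented_size = 200, 5, 20
refined = rng.random(n_atlas)                       # expensive, accurate metric
prelim = refined + rng.normal(0, 0.1, n_atlas)      # cheap, noisy surrogate

# Stage 1: keep an augmented subset using only the cheap metric.
stage1 = np.argsort(prelim)[-augmented_size:]

# Stage 2: run full-fledged registration only on the augmented subset,
# then pick the final fusion set by the refined metric.
fusion = stage1[np.argsort(refined[stage1])[-fusion_size:]]
print("fusion set:", sorted(fusion.tolist()))
```

The augmented-subset size (20 here) plays the role the paper derives formally: it must be large enough that the truly relevant atlases survive the noisy first stage with high probability, while still avoiding 180 of the 200 expensive registrations.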

  6. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, incorporating a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
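The alternating-constraint iteration at the heart of the ER algorithm can be sketched on a toy 1-D signal; the signal, the missing-area layout, and the assumption that the magnitude is known exactly are all illustrative simplifications of the paper's patch-based scheme:

```python
import numpy as np

# Toy 1-D "patch" with a missing segment; its Fourier magnitude is assumed known
# (in the paper it is estimated from similar known patches).
x = np.sin(np.linspace(0, 4 * np.pi, 64))
known = np.ones(64, bool)
known[20:30] = False                 # missing area
mag = np.abs(np.fft.fft(x))

# Error-reduction iterations: enforce the magnitude in Fourier space,
# then re-impose the known intensities in signal space.
est = np.where(known, x, 0.0)
for _ in range(200):
    X = np.fft.fft(est)
    X = mag * np.exp(1j * np.angle(X))       # magnitude constraint
    est = np.real(np.fft.ifft(X))
    est[known] = x[known]                    # known-intensity constraint
print("mean error on missing part:", np.mean(np.abs(est[~known] - x[~known])))
```

Each pass can only reduce the combined constraint error, which is the convergence property the authors monitor when selecting similar known patches.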

  7. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
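A hedged sketch of the greedy grouping idea (not the authors' actual heuristic): lost packets can share one XOR retransmission as long as no receiver is missing more than one packet in the group, since each receiver can then cancel the packets it already holds; the loss table below is invented:

```python
# Hypothetical loss table: receiver -> set of lost packet ids.
losses = {"r1": {1, 4}, "r2": {2}, "r3": {1, 3}}

def xor_groups(losses):
    """Greedy heuristic: group lost packets so that no receiver misses two
    packets of the same group; each group costs one XOR retransmission."""
    lost = sorted(set().union(*losses.values()))
    groups = []
    for p in lost:
        for g in groups:
            # p may join g only if every receiver misses at most one of g + {p}
            if all(len((g | {p}) & miss) <= 1 for miss in losses.values()):
                g.add(p)
                break
        else:
            groups.append({p})
    return groups

groups = xor_groups(losses)
print(groups)  # each set is XOR-ed and sent as one retransmission
```

Here four lost packets are recovered with two retransmissions instead of four; maximizing the size of each group over all receivers is the NP-complete problem the paper identifies.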

  8. On the number of entangled qubits in quantum wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Mohapatra, Amit Kumar; Balakrishnan, S.

    2016-08-01

    Wireless sensor networks (WSNs) can benefit from security schemes based on the concepts of quantum computation and cryptography. However, quantum wireless sensor networks (QWSNs) are shown to have many practical constraints. One of these constraints is the large number of entangled qubits required by the quantum security scheme proposed by Nagy et al. [Nat. Comput. 9 (2010) 819]. In this work, we propose a modification of the security scheme introduced by Nagy et al. and show that it reduces the number of entangled qubits. Further, the modified scheme can overcome some of the constraints of QWSNs.

  9. A Comparative Study of Mindfulness Efficiency Based on Islamic-Spiritual Schemes and Group Cognitive Behavioral Therapy on Reduction of Anxiety and Depression in Pregnant Women.

    PubMed

    Aslami, Elahe; Alipour, Ahmad; Najib, Fatemeh Sadat; Aghayosefi, Alireza

    2017-04-01

    Anxiety and depression during pregnancy are among the factors affecting undesirable pregnancy and delivery outcomes. One way of controlling anxiety and depression is mindfulness and cognitive behavioral therapy. The purpose of this study was to compare the efficiency of mindfulness based on Islamic-spiritual schemes and group cognitive behavioral therapy in reducing anxiety and depression in pregnant women. The research design was quasi-experimental, in the form of a pretest-posttest design with a control group. Among the pregnant women in the 16th to 32nd weeks of pregnancy referred to the health center, 30 pregnant women with a high anxiety level and 30 pregnant women with high depression participated in the research. Fifteen participants with high depression and 15 participants with high anxiety were randomly assigned to the intervention group treated with mindfulness based on Islamic-spiritual schemes. In addition, 15 participants with high depression scores and 15 with high anxiety scores were assigned to the group cognitive behavioral therapy group. The control group consisted of 15 pregnant women with high anxiety and depression. The Beck anxiety-depression questionnaire was administered in the two steps of pre-test and post-test. Data were analyzed using SPSS, version 20, and P≤0.05 was considered significant. The results of the multivariate analysis of variance and the Tukey post hoc test showed a significant difference between the mean anxiety and depression scores of the mindfulness based on the spiritual-Islamic scheme group (P<0.001) and the cognitive behavioral therapy group, both with each other (P<0.001) and with the control group (P<0.001). The mean anxiety and depression scores decreased in the intervention groups but increased in the control group. Both therapy methods were effective in reducing anxiety and depression in pregnant women, but the effect of mindfulness based on spiritual-Islamic schemes was greater.

  10. Design and simulation of Macro-Fiber composite based serrated microflap for wind turbine blade fatigue load reduction

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Dai, Qingli; Bilgen, Onur

    2018-05-01

    A Macro-Fiber Composite (MFC) based active serrated microflap is designed in this research for wind turbine blades. Its fatigue load reduction potential is evaluated in normal operating conditions. The force and displacement output of the MFC-based actuator are simulated using a bimorph beam model. The work done by the aerodynamic, centripetal and gravitational forces acting on the microflap were calculated to determine the required capacity of the MFC-based actuator. MFC-based actuators with a lever mechanical linkage are designed to achieve the required force and displacement to activate the microflap. A feedback control scheme is designed to control the microflap during operation. Through an aerodynamic-aeroelastic time marching simulation with the designed control scheme, the time responses of the wind turbine blades are obtained. The fatigue analysis shows that the serrated microflap can reduce the standard deviation of the blade root flapwise bending moment and the fatigue damage equivalent loads.

  11. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further analysis is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
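As a rough illustration of the noise-reduction step, here is a plain scalar Kalman filter applied to a noisy per-pixel intensity trace across frames; the paper uses a hybrid Kalman method, so this simplified random-walk version and all values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical intensity of one pixel across 100 frames, plus sensor noise.
truth = np.sin(np.linspace(0, 2 * np.pi, 100))
z = truth + rng.normal(0, 0.5, 100)

# Scalar Kalman filter with a random-walk state model as a denoising step.
q, r = 1e-2, 0.25          # process and measurement noise variances
x, p = z[0], 1.0
out = []
for zi in z:
    p = p + q                      # predict
    k = p / (p + r)                # Kalman gain
    x = x + k * (zi - x)           # update with new measurement
    p = (1 - k) * p
    out.append(x)
out = np.array(out)
print("raw MSE:", np.mean((z - truth) ** 2),
      "filtered MSE:", np.mean((out - truth) ** 2))
```

The filtered trace trades a small lag for a large variance reduction, which is what makes the downstream graph-cuts segmentation tractable on noisy TFM frames.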

  12. An improved input shaping design for an efficient sway control of a nonlinear 3D overhead crane with friction

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.

    2017-08-01

    This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three-dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained, as the input shaper is designed based on a complete nonlinear model, in contrast to the analytical input shaping scheme derived using a linear second-order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are evaluated based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions in the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reduction for a 3D overhead crane with friction compared to commonly designed input shapers.
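For reference, the analytical ZV shaper mentioned above consists of two impulses computed from the sway frequency and damping ratio of the linear second-order model; the crane values below are hypothetical:

```python
import math

def zv_shaper(f_hz, zeta):
    """Zero Vibration shaper impulses for natural frequency f_hz (Hz) and
    damping ratio zeta: returns [(time_s, amplitude), ...]."""
    wn = 2 * math.pi * f_hz
    wd = wn * math.sqrt(1 - zeta ** 2)                        # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    # Two impulses: the second, half a damped period later, cancels the sway
    # excited by the first.
    return [(0.0, 1 / (1 + K)), (math.pi / wd, K / (1 + K))]

# Example: a 0.5 Hz sway mode with 2% damping (hypothetical crane values).
for t, a in zv_shaper(0.5, 0.02):
    print(f"impulse at t={t:.3f} s, amplitude {a:.3f}")
```

The PSO approach in the paper instead tunes the impulse times and amplitudes against the full nonlinear crane model with friction, rather than relying on these closed-form linear-model values.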

  13. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback of OFDM communication. It drives the power amplifier into its nonlinear operating region, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results for different modulation and partitioning scenarios showed that fixed-length adjacent partitioning performed better than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared to fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.
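A minimal sketch of PTS with fixed-length adjacent partitioning, searching binary phase factors exhaustively; the frame size, number of partitions, and the ±1 phase set are illustrative assumptions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

N, V = 64, 4                                       # subcarriers, adjacent partitions
X = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Fixed-length adjacent partitioning: V contiguous blocks of N/V subcarriers.
parts = np.zeros((V, N), complex)
for v in range(V):
    parts[v, v * (N // V):(v + 1) * (N // V)] = X[v * (N // V):(v + 1) * (N // V)]
sub_ifft = np.fft.ifft(parts, axis=1)              # one IFFT per partition

# Exhaustive search over +/-1 phase factors (first factor fixed to 1).
best = min(product([1, -1], repeat=V - 1),
           key=lambda b: papr_db(sub_ifft[0] + np.dot(b, sub_ifft[1:])))
print("original PAPR:", round(papr_db(np.fft.ifft(X)), 2), "dB;",
      "PTS PAPR:", round(papr_db(sub_ifft[0] + np.dot(best, sub_ifft[1:])), 2), "dB")
```

Because the all-ones phase vector is in the search space, the PTS result can never be worse than the unmodified signal; the partitioning choice (adjacent vs. pseudorandom) governs how much better it gets for a given search cost.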

  14. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
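The single-shrinkage-threshold idea can be sketched in 1-D: dense regions lose proportionally more points, so the quantized subset adapts to the input density. The function and data below are an illustrative simplification, not the paper's exact DQS:

```python
import numpy as np

rng = np.random.default_rng(6)

# Mixed-density 1-D data: a dense cluster plus a sparse uniform background.
data = np.concatenate([rng.normal(0, 0.1, 1000), rng.uniform(-5, 5, 100)])

def quantize(points, threshold):
    """Single shrinkage threshold: a point within `threshold` of the last kept
    prototype is absorbed by it; otherwise it becomes a new prototype."""
    prototypes = []
    for p in np.sort(points):
        if not prototypes or p - prototypes[-1] > threshold:
            prototypes.append(p)
    return np.array(prototypes)

subset = quantize(data, 0.05)
print(f"{data.size} points -> {subset.size} prototypes")
```

The small quantized subset is what makes Nyström feature approximation for the LS-SVM affordable: kernel columns are computed against the prototypes rather than the full data set.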

  15. A Nonmetal Plasmonic Z-Scheme Photocatalyst with UV- to NIR-Driven Photocatalytic Protons Reduction.

    PubMed

    Zhang, Zhenyi; Huang, Jindou; Fang, Yurui; Zhang, Mingyi; Liu, Kuichao; Dong, Bin

    2017-05-01

    Ultrabroad-spectrum absorption and highly efficient generation of available charge carriers are two essential requirements for promising semiconductor-based photocatalysts, towards achieving the ultimate goal of solar-to-fuel conversion. Here, a fascinating nonmetal plasmonic Z-scheme photocatalyst with a W18O49/g-C3N4 heterostructure is reported, which can effectively harvest photon energies spanning from the UV to the near-infrared region and simultaneously possesses improved charge-carrier dynamics to boost the generation of long-lived active electrons for the photocatalytic reduction of protons into H2. By combining with theoretical simulations, a unique synergistic photocatalysis effect between the semiconductive Z-scheme charge-carrier separation and the metal-like localized-surface-plasmon-resonance-induced "hot electron" injection process is demonstrated within this binary heterostructure. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise in the face of a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.

  17. Processing lunar soils for oxygen and other materials

    NASA Technical Reports Server (NTRS)

    Knudsen, Christian W.; Gibson, Michael A.

    1992-01-01

    Two types of lunar materials are excellent candidates for lunar oxygen production: ilmenite and silicates such as anorthite. Both are lunar surface minable, occurring in soils, breccias, and basalts. Because silicates are considerably more abundant than ilmenite, they may be preferred as source materials. Depending on the processing method chosen for oxygen production and the feedstock material, various useful metals and bulk materials can be produced as byproducts. Available processing techniques include hydrogen reduction of ilmenite and electrochemical and chemical reductions of silicates. Processes in these categories are generally in preliminary development stages and need significant research and development support to carry them to practical deployment, particularly as a lunar-based operation. The goal of beginning lunar processing operations by 2010 requires that planning and research and development emphasize the simplest processing schemes. However, more complex schemes that now appear to present difficult technical challenges may offer more valuable metal byproducts later. While they require more time and effort to perfect, the more complex or difficult schemes may provide important processing and product improvements with which to extend and elaborate the initial lunar processing facilities. A balanced R&D program should take this into account. The following topics are discussed: (1) ilmenite--semi-continuous process; (2) ilmenite--continuous fluid-bed reduction; (3) utilization of spent ilmenite to produce bulk materials; (4) silicates--electrochemical reduction; and (5) silicates--chemical reduction.

  18. Considerations and techniques for incorporating remotely sensed imagery into the land resource management process.

    NASA Technical Reports Server (NTRS)

    Brooner, W. G.; Nichols, D. A.

    1972-01-01

    Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management program applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data, using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.

  19. Self-assembled hierarchical direct Z-scheme g-C3N4/ZnO microspheres with enhanced photocatalytic CO2 reduction performance

    NASA Astrophysics Data System (ADS)

    Nie, Ning; Zhang, Liuyang; Fu, Junwei; Cheng, Bei; Yu, Jiaguo

    2018-05-01

    Photocatalytic reduction of CO2 into hydrocarbon fuels has been regarded as a promising approach to ease the greenhouse effect and the energy shortage. Herein, an electrostatic self-assembly method was exploited to prepare g-C3N4/ZnO composite microspheres. This method simply utilizes the opposite surface charges of the two components, achieving a hierarchical structure with intimate contact between them. A much improved photocatalytic CO2 reduction activity was attained: the CH3OH production rate was 1.32 μmol h-1 g-1, 2.1 and 4.1 times that of pristine ZnO and g-C3N4, respectively. This facile design bestowed on the g-C3N4/ZnO composite extended light absorption caused by a multiple-light-scattering effect. It also guaranteed the uniform distribution of g-C3N4 nanosheets on the surface of the ZnO microspheres, maximizing their advantages and synergistic effect. Most importantly, the preeminent performance was attributed to, and validated as arising from, the direct Z-scheme, whereby the recombination rate is considerably suppressed. This work highlights the merit of constructing hierarchical direct Z-scheme structures for photocatalytic CO2 reduction reactions.

  20. Production of oxygen from lunar ilmenite

    NASA Technical Reports Server (NTRS)

    Zhao, Y.; Shadman, F.

    1990-01-01

    The following subjects are addressed: (1) the mechanism and kinetics of carbothermal reduction of simulated lunar ilmenite using carbon and, particularly, CO as reducing agents; (2) the determination of the rate-limiting steps; (3) the investigation of the effect of impurities, particularly magnesium; (4) the search for catalysts suitable for enhancement of the rate-limiting step; (5) the comparison of the kinetics of carbothermal reduction with those of hydrogen reduction; (6) the study of the combined use of CO and hydrogen as products of gasification of carbonaceous solids; (7) the development of reduction methods based on the use of waste carbonaceous compounds for the process; (8) the development of a carbothermal reaction path that utilizes gasification of carbonaceous solids to reducing gaseous species (hydrocarbons and carbon monoxide) to facilitate the reduction reaction kinetics and make the process more flexible in using various forms of carbonaceous feeds; (9) the development of advanced gas separation techniques, including the use of high-temperature ceramic membranes; (10) the development of an optimum process flow sheet for carbothermal reduction, and comparison of this process with the hydrogen reduction scheme, as well as a general comparison with other leading oxygen production schemes; and (11) the use of new and advanced material processing and separation techniques.

  1. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven efficient for H.264 multi-view coding. Though these prediction structures, along with QP cascading schemes, provide superior compression efficiency compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream cannot be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages, including bit rate adaptation and improved error resilience, but lacks compression efficiency compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method is also proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views can be extracted at a lower frame rate (1/2nd or 1/4th of the base view), with an average extraction time per view component of only 0.38 ms.
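As an illustration of "temporal identifier based" hierarchical coding, a dyadic GOP assigns temporal ids so that dropping the highest layers halves or quarters the frame rate; this generic dyadic layering rule is an assumption for illustration, not the paper's exact scheme:

```python
# Dyadic hierarchical GOP of size 8: temporal_id follows the frame index's
# position in the hierarchy (frames divisible by larger powers of two sit
# in lower, more important layers).
GOP = 8

def temporal_id(poc):
    i = poc % GOP
    if i == 0:
        return 0                         # anchor frame, lowest layer
    tid = GOP.bit_length() - 1           # log2(GOP) = highest temporal id
    while i % 2 == 0:
        i //= 2
        tid -= 1
    return tid

layers = [temporal_id(poc) for poc in range(8)]
print(layers)  # [0, 3, 2, 3, 1, 3, 2, 3]
```

Discarding all packets with temporal id 3 leaves frames 0, 2, 4, 6 (half rate); also discarding id 2 leaves frames 0 and 4 (quarter rate), which is the bit-rate-adaptation property the fully scalable stream provides.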

  2. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. 
Besides that, the substructural controller synthesis (SCS) design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass and stiffness properties.

  3. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria that can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
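A toy sketch of the leap-frog idea with time step sub-cycling: the two solvers exchange data only every few steps, but fall back to tight per-step coupling when reactivity changes suddenly. All function names, the crude point-kinetics and heat-balance proxies, and the thresholds below are invented for illustration:

```python
# Toy leap-frog coupling of a "neutronics" power solve and a "thermal-hydraulic"
# temperature solve; the models are placeholders, not real reactor physics.
def neutronics_step(power, reactivity, dt):
    return power * (1.0 + reactivity * dt)            # crude point-kinetics proxy

def th_step(temp, power, dt):
    return temp + dt * (0.01 * power - 0.05 * (temp - 300.0))

def run(reactivity_of_t, t_end=10.0, dt=0.5, leap=4, sub_threshold=0.1):
    power, temp, t = 1.0, 300.0, 0.0
    last_rho = reactivity_of_t(0.0)
    while t < t_end:
        rho = reactivity_of_t(t)
        # Leap-frog: TH marches `leap` steps per neutronics update, unless the
        # reactivity jumps, in which case couple tightly (sub-cycle to 1 step).
        steps = leap if abs(rho - last_rho) < sub_threshold else 1
        for _ in range(steps):
            temp = th_step(temp, power, dt)
            t += dt
        power = neutronics_step(power, rho, dt * steps)
        last_rho = rho
    return power, temp

print(run(lambda t: 0.0 if t < 5 else 0.05))   # step reactivity insertion at t=5
```

The general criteria the paper seeks would replace the simple `sub_threshold` test here, deciding automatically when decoupled marching is safe and when the coupling interval must shrink.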

  4. Orthogonal-state-based cryptography in quantum mechanics and local post-quantum theories

    NASA Astrophysics Data System (ADS)

    Aravinda, S.; Banerjee, Anindita; Pathak, Anirban; Srikanth, R.

    2014-02-01

We introduce the concept of cryptographic reduction, in analogy with a similar concept in computational complexity theory. In this framework, a class A of crypto-protocols reduces to a protocol class B in a scenario X if, for every instance a of A, there is an instance b of B and a secure transformation X that reproduces a given b, such that the security of b guarantees the security of a. Here we employ this reductive framework to study the relationship between security in quantum key distribution (QKD) and quantum secure direct communication (QSDC). We show that by replacing the streaming of independent qubits in a QKD scheme with block encoding and transmission of qubits (permuting the order of particles block by block), we can construct a QSDC scheme. This forms the basis for the block reduction from a QSDC class of protocols to a QKD class of protocols, whereby if the latter is secure, then so is the former. Conversely, given a secure QSDC protocol, we can of course construct a secure QKD scheme by transmitting a random key as the direct message. The QKD class of protocols is then secure, assuming the security of the QSDC class from which it is built. We refer to this method of deducing security for this class of QKD protocols as key reduction. Finally, we propose an orthogonal-state-based deterministic key distribution (KD) protocol which is secure in some local post-quantum theories. Its security arises neither from geographic splitting of a code state nor from Heisenberg uncertainty, but from post-measurement disturbance.

  5. Drag reduction in channel flow using nonlinear control

    NASA Technical Reports Server (NTRS)

    Keefe, Laurence R.

    1993-01-01

Two nonlinear control schemes have been applied to the problem of drag reduction in channel flow. Both schemes have been tested using numerical simulations at a mass flux Reynolds number of 4408, utilizing 2D nonlinear neutral modes for the goal dynamics. The OGY method, which requires feedback, reduces drag to 60-80 percent of the turbulent value at the same Reynolds number and employs forcing only within a thin region near the wall. The H-method, or model-based control, fails to achieve any drag reduction when starting from a fully turbulent initial condition, but shows potential for suppressing or retarding laminar-to-turbulent transition by instead imposing a transition to a low-drag, nonlinear traveling-wave solution of the Navier-Stokes equations. The drag in this state corresponds to that achieved by the OGY method. Model-based control requires no feedback, but in experiments to date it has required that the forcing be imposed within a thicker layer than the OGY method. Control energy expenditures in both methods are small, representing less than 0.1 percent of the uncontrolled flow's energy.
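
    The OGY method referenced above is usually introduced on low-dimensional chaotic maps. The sketch below applies it to the logistic map rather than channel flow, purely as an illustration of the feedback idea: when a small parameter nudge suffices to place the linearized next iterate on the unstable fixed point, apply it; otherwise iterate freely.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def ogy_step(x, r0=3.8, delta_max=0.03):
    # One iterate of the logistic map with OGY feedback targeting the unstable
    # fixed point x* = 1 - 1/r0.
    x_star = 1.0 - 1.0 / r0
    lam = r0 * (1.0 - 2.0 * x_star)     # df/dx at x* (unstable: |lam| > 1)
    g = x_star * (1.0 - x_star)         # df/dr at x*
    dr = -lam * (x - x_star) / g        # nudge that cancels the local error
    if abs(dr) > delta_max:             # too far from x*: no actuation this step
        dr = 0.0
    return logistic(x, r0 + dr)

x = 0.3
for _ in range(2000):                   # chaotic wandering until capture by x*
    x = ogy_step(x)
```

    Near the fixed point the feedback is contracting (the error shrinks roughly quadratically per step), which is the mechanism that lets tiny perturbations stabilize an otherwise unstable state.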

  6. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. Multistage variable probability sampling appears capable of producing estimates that compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.

  7. The controlled growth method - A tool for structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Sobieszczanski-Sobieski, J.

    1981-01-01

An adaptive design-variable linking scheme within an NLP-based optimization algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed optimization cycle. The proposed method is implemented for truss assemblies and a wing box structure under stress, displacement, and frequency constraints. A substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, vindicates the algorithm.

  8. Coupling of Helmholtz resonators to improve acoustic liners for turbofan engines at low frequency

    NASA Technical Reports Server (NTRS)

    Dean, L. W.

    1975-01-01

An analytical and test program was conducted to evaluate means of increasing the effectiveness of low-frequency sound-absorbing liners for aircraft turbine engines. Three schemes for coupling low-frequency absorber elements were considered. These schemes were analytically modeled and their impedance was predicted over a frequency range of 50 to 1,000 Hz. An optimum and two off-optimum designs of the most promising scheme, a parallel-coupled arrangement, were fabricated and tested in a flow duct facility. Impedance measurements were in good agreement with predicted values and validated the procedure used to transform modeled parameters into hardware designs. Measurements of attenuation for panels of coupled resonators were consistent with predictions based on measured impedance. All coupled-resonator panels tested showed an increase in peak attenuation of about 50% and an increase in attenuation bandwidth of one one-third-octave band over that measured for an uncoupled panel. These attenuation characteristics equate to about 35% greater reduction in source perceived noise level (PNL) relative to the uncoupled panel, or a reduction in treatment length of about 24% for constant PNL reduction. The increased effectiveness of the coupled-resonator concept for attenuating low-frequency broad-spectrum noise is demonstrated.
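
    For orientation, the classical resonance frequency of a single (uncoupled) Helmholtz resonator follows from the neck area, effective neck length, and cavity volume. The dimensions below are hypothetical, and the paper's coupled-resonator impedance model is more elaborate than this textbook formula.

```python
import math

def helmholtz_f0(c, neck_area, neck_len, cavity_vol, end_corr=0.0):
    # f0 = (c / 2*pi) * sqrt(A / (V * L_eff)), with L_eff the neck length
    # plus an optional end correction.
    l_eff = neck_len + end_corr
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_vol * l_eff))

# Hypothetical liner cell sized to resonate below 100 Hz.
f0 = helmholtz_f0(c=343.0, neck_area=1e-4, neck_len=0.02, cavity_vol=2.5e-3)
```

    Enlarging the cavity volume lowers the resonance, which is why low-frequency liners need deep cells and why coupling elements (as in the paper) is attractive when space is limited.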

  9. A Comparative Study of Mindfulness Efficiency Based on Islamic-Spiritual Schemes and Group Cognitive Behavioral Therapy on Reduction of Anxiety and Depression in Pregnant Women

    PubMed Central

    Aslami, Elahe; Alipour, Ahmad; Najib, Fatemeh Sadat; Aghayosefi, Alireza

    2017-01-01

ABSTRACT Background: Anxiety and depression during pregnancy are among the factors affecting undesirable pregnancy and delivery outcomes. One way of controlling anxiety and depression is mindfulness and cognitive behavioral therapy. The purpose of this study was to compare the efficiency of mindfulness based on Islamic-spiritual schemes and group cognitive behavioral therapy in reducing anxiety and depression in pregnant women. Methods: The research design was quasi-experimental, in the form of a pretest-posttest design with a control group. Among the pregnant women in the 16th to 32nd weeks of pregnancy referred to the health center, 30 pregnant women with a high anxiety level and 30 pregnant women with high depression participated in the research. Fifteen participants with high depression and 15 with high anxiety were randomly assigned to the intervention group treated with mindfulness based on Islamic-spiritual schemes. In addition, 15 participants with high depression scores and 15 with high anxiety scores were assigned to the group cognitive behavioral therapy group. The control group consisted of 15 pregnant women with high anxiety and depression. The Beck anxiety and depression questionnaires were administered at pre-test and post-test. Data were analyzed using SPSS, version 20, and P≤0.05 was considered significant. Results: Multivariate analysis of variance and post hoc Tukey tests showed a significant difference between the mean anxiety and depression scores of the mindfulness based on spiritual-Islamic schemes group and the group cognitive behavioral therapy group (P<0.001), and between each intervention group and the control group (P<0.001). The mean anxiety and depression scores decreased in the intervention groups but increased in the control group.
Conclusion: Both therapy methods were effective in reducing anxiety and depression in pregnant women, but the effect of mindfulness based on spiritual-Islamic schemes was greater. PMID:28409168

  10. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable it to deliver significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation, and anti-jamming capability compared with legacy navigation signals. Meanwhile, its characteristic scheme structure can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, a least-squares estimator jointly estimates each sub-carrier frequency shift through the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. In addition, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results show that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
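
    A minimal sketch of the weighted least-squares idea described above, assuming a toy linear model shift_i ≈ a·f_i with power-dependent weights. The sub-carrier frequencies, powers, and noise model are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def wls_doppler(f_sub, shifts, power):
    # Weighted least-squares estimate of the common Doppler scale a in the
    # linear model shift_i ~= a * f_i, weighting sub-carriers by their power.
    w = power / power.sum()
    return np.sum(w * f_sub * shifts) / np.sum(w * f_sub ** 2)

f_sub = np.array([1.0e9, 1.2e9, 1.4e9, 1.6e9])         # hypothetical sub-carriers (Hz)
power = np.array([4.0, 2.0, 1.0, 1.0])                 # relative sub-carrier powers
a_true = 1e-6                                          # true Doppler scale (v/c)
noise = rng.normal(0.0, 5.0, size=4) / np.sqrt(power)  # stronger carrier, less noise
shifts = a_true * f_sub + noise                        # measured shifts (Hz)
a_hat = wls_doppler(f_sub, shifts, power)
```

    Pooling all sub-carriers through one linear fit is what gives the joint method its accuracy and sensitivity advantage over tracking a single carrier.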

  11. Thermodynamic Analysis of Chemically Reacting Mixtures-Comparison of First and Second Order Models.

    PubMed

    Pekař, Miloslav

    2018-01-01

Recently, a method based on non-equilibrium continuum thermodynamics, which derives thermodynamically consistent reaction rate models together with thermodynamic constraints on their parameters, was analyzed using a triangular reaction scheme. The scheme was kinetically of the first order. Here, the analysis is further developed for several first- and second-order schemes to gain deeper insight into the thermodynamic consistency of rate equations and the relationships between chemical thermodynamics and kinetics. It is shown that the thermodynamic constraints on the so-called proper rate coefficients are usually simple sign restrictions consistent with the supposed reaction directions. Constraints on the so-called coupling rate coefficients are more complex and weaker. This means more freedom in kinetic coupling between reaction steps in a scheme, i.e., in the kinetic effects of other reactions on the rate of some reaction in a reacting system. When compared with traditional mass-action rate equations, the method allows a reduction in the number of traditional rate constants to be evaluated from data, i.e., a reduction in the dimensionality of the parameter estimation problem. This is due to identifying previously unknown relationships between mass-action rate constants (relationships which also include thermodynamic equilibrium constants).
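
    One concrete, classical example of how constraints reduce the number of independent mass-action constants is the detailed-balance (Wegscheider) cycle condition for a triangular scheme. This is an illustration of the general point, not the paper's non-equilibrium formalism; the rate-constant values are arbitrary.

```python
import numpy as np

# Triangular first-order scheme A<->B, B<->C, C<->A with mass-action kinetics.
# Detailed balance (the Wegscheider cycle condition) forces
#     kf1 * kf2 * kf3 == kr1 * kr2 * kr3,
# so only five of the six rate constants are independent.
kf1, kr1 = 2.0, 1.0                    # A -> B and B -> A
kf2, kr2 = 3.0, 1.5                    # B -> C and C -> B
kf3 = 0.5                              # C -> A
kr3 = kf1 * kf2 * kf3 / (kr1 * kr2)    # A -> C, fixed by the cycle condition

def rates(conc):
    a, b, c = conc
    return np.array([
        -kf1 * a + kr1 * b + kf3 * c - kr3 * a,
        kf1 * a - kr1 * b - kf2 * b + kr2 * c,
        kf2 * b - kr2 * c - kf3 * c + kr3 * a,
    ])

# Integrate to equilibrium with small explicit-Euler steps; the constrained
# constants reproduce the detailed-balance ratios b/a = 2 and c/b = 2.
conc = np.array([1.0, 0.0, 0.0])
for _ in range(20000):
    conc = conc + 1e-3 * rates(conc)
```

    With the cycle condition imposed, one fewer rate constant must be estimated from data, which is the dimensionality reduction the abstract refers to in its simplest form.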

  12. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of a watermarking algorithm to enhance security. Therefore, how to select an image scrambling scheme, and what kind of scheme is suitable for watermarking, are key problems. An evaluation method for image scrambling schemes can serve as a useful test tool for revealing the properties or flaws of a scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-planes is presented to measure the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. Hence, instead of taking all 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to determine the scrambling degree. This 50% reduction in computational cost makes our scheme efficient.
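
    The bit-plane selection discussed above can be sketched as follows. The random stand-in image and the reconstruction check are illustrative only; the paper scores scrambling degree, not reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in gray image

def bit_plane(image, k):
    # Extract bit-plane k (k = 7 is the most significant) as a 0/1 array.
    return (image >> k) & 1

# Reconstruct using only the 4 most significant planes; the discarded low
# nibble can change a pixel by at most 15 grey levels.
top4 = sum(bit_plane(img, k).astype(np.uint16) << k for k in range(4, 8))
max_err = int(np.abs(img.astype(int) - top4.astype(int)).max())
```

    Because the top 4 planes carry almost all of the visual structure, evaluating only those planes halves the work while leaving the scrambling-degree score essentially unchanged, as the paper's figures report.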

  13. Physiotherapy movement based classification approaches to low back pain: comparison of subgroups through review and developer/expert survey.

    PubMed

    Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W

    2012-02-20

    Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. 
Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.

  14. Electroreduction-based electrochemical-enzymatic redox cycling for the detection of cancer antigen 15-3 using graphene oxide-modified indium-tin oxide electrodes.

    PubMed

    Park, Seonhwa; Singh, Amardeep; Kim, Sinyoung; Yang, Haesik

    2014-02-04

    We compare herein biosensing performance of two electroreduction-based electrochemical-enzymatic (EN) redox-cycling schemes [the redox cycling combined with simultaneous enzymatic amplification (one-enzyme scheme) and the redox cycling combined with preceding enzymatic amplification (two-enzyme scheme)]. To minimize unwanted side reactions in the two-enzyme scheme, β-galactosidase (Gal) and tyrosinase (Tyr) are selected as an enzyme label and a redox enzyme, respectively, and Tyr is selected as a redox enzyme label in the one-enzyme scheme. The signal amplification in the one-enzyme scheme consists of (i) enzymatic oxidation of catechol into o-benzoquinone by Tyr and (ii) electroreduction-based EN redox cycling of o-benzoquinone. The signal amplification in the two-enzyme scheme consists of (i) enzymatic conversion of phenyl β-d-galactopyranoside into phenol by Gal, (ii) enzymatic oxidation of phenol into catechol by Tyr, and (iii) electroreduction-based EN redox cycling of o-benzoquinone including further enzymatic oxidation of catechol to o-benzoquinone by Tyr. Graphene oxide-modified indium-tin oxide (GO/ITO) electrodes, simply prepared by immersing ITO electrodes in a GO-dispersed aqueous solution, are used to obtain better electrocatalytic activities toward o-benzoquinone reduction than bare ITO electrodes. The detection limits for mouse IgG, measured with GO/ITO electrodes, are lower than when measured with bare ITO electrodes. Importantly, the detection of mouse IgG using the two-enzyme scheme allows lower detection limits than that using the one-enzyme scheme, because the former gives higher signal levels at low target concentrations although the former gives lower signal levels at high concentrations. The detection limit for cancer antigen (CA) 15-3, a biomarker of breast cancer, measured using the two-enzyme scheme and GO/ITO electrodes is ca. 0.1 U/mL, indicating that the immunosensor is highly sensitive.

  15. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, one speech-model-based and one auditory-model-based, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to using a similar Wiener-based gain rule in each. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
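
    The shared Wiener-based gain rule mentioned above follows the textbook form G = SNR/(1 + SNR). The sketch below uses a crude power-subtraction SNR estimate and a spectral floor as assumptions; the study's actual estimators (e.g., minimum statistics) are more sophisticated.

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=0.1):
    # Wiener-style gain G = SNR / (1 + SNR), with the a-priori SNR crudely
    # estimated by power subtraction and limited below by a spectral floor.
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

# Bins dominated by speech keep a gain near 1; noise-only bins fall to the floor.
noisy = np.array([100.0, 10.0, 1.0])    # per-bin noisy-signal power
noise = np.array([1.0, 1.0, 1.0])       # estimated noise power
g = wiener_gain(noisy, noise)
```

    Holding this gain rule fixed across algorithms isolates the contribution of the noise-estimation stage, which is the comparison the study is after.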

  16. Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.

    PubMed

    Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting

    2017-06-01

Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method improves efficiency and convergence over the original DE algorithm and a constant stepwise population-reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting to a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. Combined with self-adaptive schemes for the mutation and crossover constants, this adaptive population reduction method points toward a completely tuning-free self-adaptive DE algorithm.
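
    A simplified sketch of DE with population reduction restricted to the exploitation stage, in the spirit of the abstract. The reduction schedule (drop the worst member per late generation) and all parameters are stand-ins, not CAPR's actual continuous law.

```python
import numpy as np

rng = np.random.default_rng(2)

def de_with_reduction(f, dim=5, pop_init=40, pop_min=8, iters=200, F=0.5, CR=0.9):
    # DE/rand/1/bin whose population shrinks only during the later,
    # exploitation stage: from iteration iters//2 on, the worst member is
    # dropped each generation until pop_min members remain.
    pop = rng.uniform(-5.0, 5.0, size=(pop_init, dim))
    fit = np.array([f(x) for x in pop])
    for it in range(iters):
        n = len(pop)
        for i in range(n):
            a, b, c = pop[rng.choice(n, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, f_trial
        if it > iters // 2 and len(pop) > pop_min:   # exploitation-stage reduction
            worst = int(np.argmax(fit))
            pop = np.delete(pop, worst, axis=0)
            fit = np.delete(fit, worst)
    return pop[np.argmin(fit)], float(fit.min())

best_x, best_f = de_with_reduction(lambda x: float(np.sum(x ** 2)))
```

    Keeping the full population during exploration preserves diversity; shrinking it later spends fewer evaluations per generation, which in a drug-screening context means fewer cells and animals.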

  17. Farmers value on-farm ecosystem services as important, but what are the impediments to participation in PES schemes?

    PubMed

    Page, Girija; Bellotti, Bill

    2015-05-15

Optimal participation in market-based instruments such as PES (payment for ecosystem services) schemes is a necessary precondition for achieving large-scale, cost-effective conservation goals in agricultural landscapes. However, farmers' willingness to participate in voluntary conservation programmes is influenced by psychological, financial, and social factors, and these need to be assessed on a case-by-case basis. In this research, farmers' values towards on-farm ecosystem services, motivations, and perceived impediments to participation in conservation programmes were identified in two local land services regions in Australia using surveys. Results indicated that irrespective of demographics such as age, gender, years farmed, area owned, and annual gross farm income, farmers valued ecosystem services as important for future sustainability. Non-financial motivations had significant associations with farmers' perceptions regarding attitudes and values towards the environment and participation in conservation-related programmes. Farmer factors such as lack of awareness and unavailability of adequate information were correlated with non-participation in conservation-based programmes. In the current political context, government uncertainty regarding schemes, especially around carbon sequestration and reduction, was the most frequently cited impediment that could deter participation. Future research that explores farmers' willingness to participate in various types of PES programmes developed around carbon reduction, water quality provision, and biodiversity conservation, and the contract durations and payment levels that are attractive to farmers, will provide insights for developing farmer-friendly PES schemes in the region. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. A combination of spatial and recursive temporal filtering for noise reduction when using region of interest (ROI) fluoroscopy for patient dose reduction in image guided vascular interventions with significant anatomical motion

    NASA Astrophysics Data System (ADS)

    Setlur Nagesh, S. V.; Khobragade, P.; Ionita, C.; Bednarek, D. R.; Rudin, S.

    2015-03-01

Because x-ray based image-guided vascular interventions are minimally invasive, they are currently the preferred method of treating disorders such as stroke, arterial stenosis, and aneurysms; however, the x-ray exposure to the patient during long image-guided interventional procedures could cause harmful effects such as cancer in the long run and even tissue damage in the short term. ROI fluoroscopy reduces patient dose by differentially attenuating the incident x-rays outside the region of interest. To reduce the noise in the dose-reduced regions, recursive temporal filtering was previously demonstrated successfully for neurovascular interventions. However, in cardiac interventions anatomical motion is significant, and excessive recursive filtering could cause blur. In this work the effects of three noise-reduction schemes, recursive temporal filtering, spatial mean filtering, and a combination of spatial and recursive temporal filtering, were investigated in a simulated ROI dose-reduced cardiac intervention. First, a model to simulate the aortic arch and its movement was built. A coronary stent was used to simulate a bioprosthetic valve used in TAVR procedures and was deployed under dose-reduced ROI fluoroscopy during the simulated heart motion. The images were then retrospectively processed for noise reduction in the periphery using recursive temporal filtering, spatial filtering, and a combination of both. Quantitative metrics for all three noise-reduction schemes were calculated and are presented as results. From these it can be concluded that, with significant anatomical motion, a combined spatial and recursive temporal filtering scheme is best suited for reducing the excess quantum noise in the periphery. This new noise-reduction technique in combination with ROI fluoroscopy has the potential for substantial patient-dose savings in cardiac interventions.
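
    The combined scheme can be sketched as a 3x3 spatial mean followed by a first-order recursive temporal filter. Frame sizes, noise level, and the weight alpha below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def spatial_mean(frame, k=3):
    # k x k moving-average filter built from shifted sums (no SciPy needed).
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def combined_filter(frames, alpha=0.5):
    # Spatial mean followed by a first-order recursive temporal filter:
    #     y_n = alpha * s_n + (1 - alpha) * y_{n-1}.
    y = spatial_mean(frames[0])
    for frame in frames[1:]:
        y = alpha * spatial_mean(frame) + (1.0 - alpha) * y
    return y

rng = np.random.default_rng(3)
truth = np.full((32, 32), 100.0)                        # static "anatomy"
frames = [truth + rng.normal(0.0, 20.0, truth.shape) for _ in range(10)]
filtered = combined_filter(frames)
raw_std = float(frames[-1].std())                       # quantum-noise level
filt_std = float((filtered - truth).std())              # residual noise level
```

    A larger alpha shortens the temporal memory (less motion blur but less averaging), which is the trade-off that makes the spatial term valuable when anatomy moves.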

  19. Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho

    2016-11-01

This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms, which provided an unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in the amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of the higher sampling efficiency of non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging, which was regarded as the gold standard (difference within 1.8 mm on average).
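
    A toy illustration of deriving a respiratory surrogate by dimensionality reduction: PCA (via SVD) on a synthetic translated-profile slice series recovers the breathing signal up to sign and scale. The one-dimensional imaging model is a stand-in, not the paper's acquisition scheme.

```python
import numpy as np

# Synthetic dynamic slice series: a 1-D intensity profile translated by a
# breathing signal; PCA on the frames recovers the signal up to sign/scale.
t = np.linspace(0.0, 20.0, 200)
breath = np.sin(2.0 * np.pi * t / 4.0)                 # 4-s respiratory period
x = np.linspace(-3.0, 3.0, 64)
frames = np.array([np.exp(-(x - 0.5 * b) ** 2) for b in breath])   # (200, 64)

# Dimensionality reduction: project mean-centred frames onto the top
# right-singular vector (PCA via SVD) to obtain a 1-D surrogate signal.
centred = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
surrogate = centred @ vt[0]
surrogate /= np.abs(surrogate).max()

corr = abs(np.corrcoef(surrogate, breath)[0, 1])       # agreement with truth
```

    Because the surrogate is computed from the images themselves, no navigator acquisition is spent, which underlies the sampling-efficiency advantage the abstract reports.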

  20. Travelling Wave Pulse Coupled Oscillator (TWPCO) Using a Self-Organizing Scheme for Energy-Efficient Wireless Sensor Networks.

    PubMed

    Al-Mekhlafi, Zeyad Ghaleb; Hanapi, Zurina Mohd; Othman, Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

Recently, Pulse Coupled Oscillator (PCO)-based travelling waves have attracted substantial attention from researchers working on wireless sensor network (WSN) synchronization. Because WSN protocols often mimic natural phenomena, the PCO model draws on the firefly synchronization by which fireflies attract mating partners to model the WSN. However, given that sensor nodes are unable to receive messages while transmitting data packets (due to deafness), the PCO model may not be efficient for sensor network modelling. To overcome this limitation, this paper proposes a new scheme called the Travelling Wave Pulse Coupled Oscillator (TWPCO). For this, the study used a self-organizing scheme for energy-efficient WSNs that adopts travelling-wave, biologically inspired network systems based on phase locking of the PCO model to counteract deafness. From the simulation, it was found that the proposed TWPCO scheme attained a steady state after a number of cycles. It also showed superior performance compared to other mechanisms, with a reduction in the total energy consumption of 25% and an improvement of 13% in terms of data gathering. Based on the results, the proposed scheme avoids the deafness that occurs in the transmit state in WSNs and increases the data collected throughout the transmission states in WSNs.
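
    The underlying firefly-style PCO synchronization can be sketched with a minimal Mirollo-Strogatz-type model. The multiplicative phase kick and all constants are illustrative assumptions, and this is the plain PCO model, not the TWPCO scheme itself.

```python
import numpy as np

def pco_sync(n=10, eps=0.2, events=2000, seed=4):
    # Pulse-coupled oscillators: phases drift toward threshold 1; when one
    # fires, every other phase receives a multiplicative kick (capped at the
    # threshold), and anyone pushed to threshold fires in the same round
    # (absorption).  Generic initial phases synchronize into one cluster.
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 1.0, n)
    for _ in range(events):
        phase += 1.0 - phase.max()              # advance to the next firing event
        fired = phase >= 1.0 - 1e-12
        phase[~fired] = np.minimum(1.0, (1.0 + eps) * phase[~fired])
        fired = phase >= 1.0 - 1e-12            # absorb newly pushed-over nodes
        phase[fired] = 0.0
    return phase

final = pco_sync()
n_clusters = len(np.unique(np.round(final, 6)))
```

    In a WSN the "flash" is a sync packet; the deafness problem the abstract targets arises because a node that is transmitting its flash cannot hear anyone else's.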

  2. Automated detection of masses on whole breast volume ultrasound scanner: false positive reduction using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Yuya; Muramatsu, Chisako; Kobayashi, Hironobu; Hara, Takeshi; Fujita, Hiroshi

    2017-03-01

Breast cancer screening with mammography and ultrasonography is expected to improve sensitivity compared with mammography alone, especially for women with dense breasts. An automated breast volume scanner (ABVS) provides operator-independent whole-breast data, which facilitate double reading and comparison with past exams, the contralateral breast, and multimodality images. However, the large volumetric data in screening practice increase radiologists' workload. Therefore, our goal is to develop a computer-aided detection scheme for breast masses in ABVS data to assist radiologists' diagnosis and comparison with mammographic findings. In this study, a false positive (FP) reduction scheme using a deep convolutional neural network (DCNN) was investigated. For training the DCNN, true positive and FP samples were obtained from the result of our initial mass detection scheme using the vector convergence filter. Regions of interest including the detected regions were extracted from the multiplanar reconstruction slices. We investigated methods to select effective FP samples for training the DCNN. Based on the free-response receiver operating characteristic analysis, simple random sampling from the entire candidate set was most effective in this study. Using the DCNN, the number of FPs could be reduced by 60% while retaining 90% of true masses. The result indicates the potential usefulness of DCNN-based FP reduction in automated mass detection on ABVS images.

  3. Wideband tunable laser phase noise reduction using single sideband modulation in an electro-optical feed-forward scheme.

    PubMed

    Aflatouni, Firooz; Hashemi, Hossein

    2012-01-15

    A wideband laser phase noise reduction scheme is introduced where the optical field of a laser is single sideband modulated with an electrical signal containing the discriminated phase noise of the laser. The proof-of-concept experiments on a commercially available 1549 nm distributed feedback laser show linewidth reduction from 7.5 MHz to 1.8 kHz without using large optical cavity resonators. This feed-forward scheme performs wideband phase noise cancellation independent of the light source and, as such, it is compatible with the original laser source tunability without requiring tunable optical components. By placing the proposed phase noise reduction system after a commercial tunable laser, a tunable coherent light source with kilohertz linewidth over a tuning range of 1530-1570 nm is demonstrated.

  4. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  5. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed, and its simulation is carried out in the MATLAB-Simulink environment. The obtained results demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and a tapped load fault under one-phase-open condition at both points of common coupling.

  6. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  7. High Performance ZVT with Bus Clamping Modulation Technique for Single Phase Full Bridge Inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yinglai; Ayyanar, Raja

    2016-03-20

    This paper proposes a topology based on bus clamping modulation and the zero-voltage-transition (ZVT) technique to realize zero-voltage switching (ZVS) for all the main switches of the full bridge inverter, and inherent ZVS and/or ZCS for the auxiliary switches. The advantages of the strategy include a significant reduction in the turn-on loss of the ZVT auxiliary switches, which typically accounts for a major part of the total loss in other ZVT circuits, and a reduction in the voltage ratings of the auxiliary switches. The modulation scheme and the commutation stages are analyzed in detail. Finally, a 1 kW, 500 kHz switching frequency inverter of the proposed topology using SiC MOSFETs has been built to validate the theoretical analysis. The ZVT with bus clamping modulation technique, with both fixed-timing and adaptive-timing schemes, is implemented on a DSP TMS320F28335, resulting in full ZVS for the main switches in the full bridge inverter. The proposed scheme can save up to 33% of the switching loss compared with the no-ZVT case.

  8. Effective preemptive scheduling scheme for optical burst-switched networks with cascaded wavelength conversion consideration

    NASA Astrophysics Data System (ADS)

    Gao, Xingbo

    2010-03-01

    We introduce a new preemptive scheduling technique for next-generation optical burst switching (OBS) networks considering the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all optically from source to destination, each wavelength conversion performed along the lightpath may cause certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver would not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to deal with the fairness problem and aim at burst loss reduction simultaneously in OBS environments. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already conducted wavelength conversions among other factors for this burst. When contention occurs, a new arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results have shown that the proposed scheme further improves fairness and achieves burst loss reduction as well.

  9. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    NASA Astrophysics Data System (ADS)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied to the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.

  10. On some Approximation Schemes for Steady Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes-like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  11. Reviewing Landmark Nitrogen Cap and Trade Legislation in New Zealand's Taupo Catchment: What Have We Learned after 5+ Years?

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Hamilton, D. P.

    2014-12-01

    In 2007, the first cap and trade legislation for a catchment nitrogen (N) budget was enacted to protect water quality in New Zealand's iconic Lake Taupo. The clarity of the 616 km² N-limited oligotrophic lake was declining due to human-induced increases in N losses from the 3,487 km² catchment. Focus was placed on reversing increases in N inputs from agriculture, and to a lesser degree sewerage sources. The legislation imposed a cap equal to 20% reduction in the N inputs to the lake, and enabled trading. The landmark legislation could have failed during appeal. Sources of disagreement included the N budgeting model and grand-parenting method that benchmarked the N leaching of individual farms. The N leaching rates for key land uses were also a major battleground, with strong effects on the viability of trading and relative value of enterprises. Sufficient science was applied to resolve the substantive issues in the appeal by 2008. Crucially, the decision recognized that N inputs to the "N cascade" mattered more than leaching evidence including land-use legacies. Other catchment cap-and-trade schemes followed. Rotorua Lakes had already capped inputs and established a ~33% N input reduction target after acceptance of a trading scheme compatible with groundwater lag times. In the Upper Manawatu catchment, a cap-and-trade scheme now governs river N loads in a more typical farming region, with an innovative allocation scheme based on the natural capital of soils. Collectively, these schemes have succeeded in imposing a cap, and signaling the intention of reductions over time. I conclude with common themes in the successes, and examine the role of science in the success and ongoing implementation. Central to success has been the role of science in framing N budgets at farm and catchment scales. Long-term data has been invaluable, despite the need to correct biases. 
Cap-and-trade policies alter future science needs toward reducing uncertainty in overall budgets, the ability to measure success or failure in innovative source reductions at a management scale, and defining quantitative measures of aquatic health. Broadly, the schemes have enabled a culture of innovation, in farming and research. For example, recent evidence suggests it may be possible to flip the Rotorua Lakes into a P limitation regime through alum dosing.

  12. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and provides great flexibility in encoder design. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while also reducing the encoding delay.
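As a toy illustration of encoding directly from H, the sketch below uses a small parity-check matrix in systematic form H = [A | I], so the parity bits follow from the sparse H itself (p = A·m mod 2) rather than from a dense generator matrix G. The tiny 3x7 matrix and the `encode` helper are illustrative assumptions, not the AR4JA protograph construction from the article.

```python
# Toy systematic parity-check matrix H = [A | I]. With H in this form the
# parity bits come straight from H itself, so encoding needs only the sparse
# H, not a dense generator matrix G. This 3x7 H is an illustrative
# placeholder, not an actual AR4JA matrix.
A = [[1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H = [row_a + row_i for row_a, row_i in zip(A, I3)]

def encode(msg):
    # parity bits p = A * m (mod 2); codeword is [message | parity]
    parity = [sum(a * m for a, m in zip(row, msg)) % 2 for row in A]
    return msg + parity
```

Every codeword produced this way satisfies all parity checks, i.e. H·c = 0 (mod 2).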

  13. Don’t make cache too complex: A simple probability-based cache management scheme for SSDs

    PubMed Central

    Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, its performance and reliability degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme. PMID:28358897
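The probability-based cache-entrance decision can be sketched roughly as follows; the fixed admission probability `p` and the LRU eviction policy are illustrative assumptions, not details taken from the paper.

```python
import random

class ProCache:
    """Sketch of probability-based cache admission: a missing block enters
    the cache only if it passes a random test, so frequently written (hot)
    blocks are admitted with high cumulative probability while cold blocks
    tend to stay out. Admission probability p and LRU eviction are
    illustrative choices, not the paper's exact policy."""

    def __init__(self, capacity, p=0.5, seed=None):
        self.capacity = capacity
        self.p = p
        self.cache = []                      # LRU order: least recent first
        self.rng = random.Random(seed)

    def access(self, block):
        if block in self.cache:              # hit: refresh recency
            self.cache.remove(block)
            self.cache.append(block)
            return True
        if self.rng.random() < self.p:       # probabilistic admission
            if len(self.cache) >= self.capacity:
                self.cache.pop(0)            # evict least recently used
            self.cache.append(block)
        return False
```

A block written k times is kept out of the cache only with probability (1 - p)^k, which captures the "hot data eventually enters" observation.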

  14. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    PubMed

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, its performance and reliability degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.

  15. A frequency-based window width optimized two-dimensional S-Transform profilometry

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao

    2017-11-01

    A new scheme is proposed as a frequency-based window width optimized two-dimensional S-Transform profilometry, in which parameters pu and pv are introduced to control the width of a two-dimensional Gaussian window. Unlike the standard two-dimensional S-transform, which uses a Gaussian window with width proportional to the reciprocal of the local frequency of the tested signal, the window width of the optimized two-dimensional S-Transform varies with the pu-th (pv-th) power of the reciprocal local frequency fx (fy) in the x (y) direction. The paper gives a detailed theoretical analysis of the optimized two-dimensional S-Transform in fringe analysis as well as the characteristics of the modified Gaussian window. Simulations are used to evaluate the proposed scheme; the results show that the new scheme has better noise reduction ability and can extract the phase distribution more precisely than the standard two-dimensional S-transform, even when the surface of the measured object varies sharply. Finally, the proposed scheme is demonstrated on three-dimensional surface reconstruction of a complex plastic cat mask to show its effectiveness.
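The frequency-dependent window-width rule can be illustrated with a short sketch: the Gaussian widths scale as 1/fx^pu and 1/fy^pv, with pu = pv = 1 recovering the standard S-transform window. The grid size, sampling step, and lack of normalization are illustrative choices, not details from the paper.

```python
import math

def st_window(fx, fy, pu, pv, size, dx=1.0):
    """2-D Gaussian analysis window whose widths follow the optimized
    S-transform rule: sigma_x = 1/|fx|**pu, sigma_y = 1/|fy|**pv.
    Returns a size x size grid centered on the window peak."""
    sx = 1.0 / abs(fx) ** pu
    sy = 1.0 / abs(fy) ** pv
    half = size // 2
    return [[math.exp(-((i - half) * dx) ** 2 / (2 * sx ** 2)
                      - ((j - half) * dx) ** 2 / (2 * sy ** 2))
             for j in range(size)] for i in range(size)]
```

Higher local frequencies give narrower windows (better spatial localization), and the exponents pu, pv tune how aggressively the width shrinks.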

  16. The Gravity Probe B `Niobium bird' experiment: Verifying the data reduction scheme for estimating the relativistic precession of Earth-orbiting gyroscopes

    NASA Technical Reports Server (NTRS)

    Uemaatsu, Hirohiko; Parkinson, Bradford W.; Lockhart, James M.; Muhlfelder, Barry

    1993-01-01

    Gravity Probe B (GP-B) is a relativity gyroscope experiment begun at Stanford University in 1960 and supported by NASA since 1963. This experiment will check, for the first time, the relativistic precession of an Earth-orbiting gyroscope that was predicted by Einstein's General Theory of Relativity, to an accuracy of 1 milliarcsecond per year or better. A drag-free satellite will carry four gyroscopes in a polar orbit to observe their relativistic precession. The primary sensor for measuring the direction of the gyroscope spin axis is a SQUID (superconducting quantum interference device) magnetometer. The data reduction scheme designed for the GP-B program processes the signal from the SQUID magnetometer and estimates the relativistic precession rates. We formulated the data reduction scheme and designed the Niobium bird experiment to verify its performance experimentally with an actual SQUID magnetometer within the test loop. This paper reports the results from the first phase of the Niobium bird experiment, which used a commercially available SQUID magnetometer as its primary sensor, and addresses the issues they raised. The first phase revealed a large, temperature-dependent bias drift, which motivated a drift-insensitive design and a temperature regulation scheme.

  17. Design of a 3-dimensional visual illusion speed reduction marking scheme.

    PubMed

    Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei

    2017-03-01

    To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.

  18. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection, and save power in wireless transmission. Applying the method to the electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression respectively with the MIT/BIH database. Power reduction is demonstrated using a Bluetooth transceiver: transmission power is reduced to 18% of the original for lossy and 53% for lossless transmission. Options for a hybrid transmission mode, adaptive rate selection, and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
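The lossy-plus-residual idea can be sketched as follows, with coarse quantization standing in for the paper's lossy ECG compressor and zlib standing in for its entropy coder; both are substitutions for illustration only, and samples are assumed to be 8-bit (0-255) values.

```python
import zlib

def hybrid_compress(samples, step=8):
    """Sketch of the hybrid scheme: a lossy layer (coarse quantization here)
    plus an entropy-coded residual (zlib here) that allows exact restoration
    on demand. Samples are assumed to be ints in 0..255."""
    lossy = [round(s / step) for s in samples]                 # lossy layer
    residual = [s - q * step for s, q in zip(samples, lossy)]  # |r| <= step/2
    residual_stream = zlib.compress(bytes(r + step for r in residual))
    return lossy, residual_stream, step

def lossless_restore(lossy, residual_stream, step):
    """Add the decoded residual back onto the lossy reconstruction."""
    residual = [b - step for b in zlib.decompress(residual_stream)]
    return [q * step + r for q, r in zip(lossy, residual)]
```

A receiver can use the lossy layer alone at low power, and request the residual stream only when a bit-exact waveform is needed.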

  19. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

    The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology was developed and implemented for constructing hybrid advection schemes that blend non-dissipative fluxes, consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes. A series of benchmark problems with increasing spatial and fluid dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity capturing capability, and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.

  20. Analysis and Countermeasures of Wind Power Accommodation by Aluminum Electrolysis Pot-Lines in China

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Ran, Ling; He, Guixiong; Wang, Zhenyu; Li, Jie

    2017-10-01

    Unit energy consumption and its price have become the main obstacles to the future development of the aluminum electrolysis industry in China. Meanwhile, much wind power is abandoned because of its instability. In this study, a novel idea for wind power accommodation is proposed to achieve a win-win situation: nearby aluminum electrolysis plants absorb the wind power. The features of the wind power distribution and the aluminum electrolysis industry are first summarized, and the concept of wind power accommodation by the aluminum industry is introduced. Then, based on the characteristics of aluminum reduction cells, the key problems, including the bus-bar status, thermal balance, and magnetohydrodynamic instabilities, are analyzed. In addition, a complete accommodation implementation plan for wind power by aluminum reduction is introduced to explain the theoretical value of accommodation, the evaluation of the reduction cells, and the industrial experiment scheme. A numerical simulation of a typical scenario shows that the aluminum reduction cells have large accommodation potential. Aluminum electrolysis can accommodate wind power and remain stable under the proper technique and accommodation scheme, which will provide promising benefits for both the aluminum plant and the wind energy plant.

  1. Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower-bound proofs show that, when the load is uniformly distributed, the data traffic associated with factoring an n x n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n x n sparse matrices representing a √n x √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and the data traffic generated by these schemes is shown to be asymptotically optimal. The schemes allow efficient use of up to O(n²) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches its maximum value of O(n³) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes make better use of the data accessed from shared memory, and thus generate less data traffic, than schemes based on column-wise wrap-around assignment.

  2. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
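The flavor of an integer transform can be illustrated with a 4-point example: the forward transform uses only small-integer multiplies, and the row-norm scaling is deferred to reconstruction (where a codec would fold it into quantization). The particular coefficient matrix below is an illustrative choice with mutually orthogonal integer rows, not the ICT matrix from the article.

```python
# 4-point transform matrix with mutually orthogonal integer rows, in the
# same spirit as the ICT: forward transform in pure integer arithmetic,
# with scaling by the row norms handled separately at reconstruction.
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def forward(x):
    # integer-only multiply-accumulate: y = C x
    return [sum(c * v for c, v in zip(row, x)) for row in C]

def inverse(y):
    # rows are orthogonal, so C C^T = diag(norms) and x = C^T diag(1/norms) y
    norms = [sum(c * c for c in row) for row in C]   # [4, 10, 4, 10]
    return [sum(C[k][i] * y[k] / norms[k] for k in range(4)) for i in range(4)]
```

Because the forward pass needs no floating-point arithmetic, the transform stage, which dominates the computational load, becomes cheap enough for constrained encoders.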

  3. Optimized scheme in coal-fired boiler combustion based on information entropy and modified K-prototypes algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu

    2018-06-01

    An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction are handled as optimization preparation by the E-Cluster and C_RED methods, in which the number of segments need not be provided in advance and can adapt continuously to the characteristics of the data. To obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. The algorithm proceeds in two stages: a K-prototypes stage in which the number of clusters is self-adapted, and a clustering stage for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system and for obtaining optimization parameters under the multi-objective rule [max ηb, min yNOx].
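The mixed-data dissimilarity underlying K-prototypes can be sketched as follows. This is the standard K-prototypes measure (squared Euclidean distance on numeric attributes plus a weighted categorical mismatch count), not the modified variant proposed in the paper, and the weight `gamma` and attribute split are user-chosen.

```python
def kproto_distance(a, b, num_idx, cat_idx, gamma=1.0):
    """Standard K-prototypes dissimilarity for mixed records a, b:
    squared Euclidean distance over the numeric attribute indices plus
    gamma times the number of mismatched categorical attributes."""
    num = sum((a[i] - b[i]) ** 2 for i in num_idx)
    cat = sum(a[i] != b[i] for i in cat_idx)
    return num + gamma * cat
```

For example, two boiler operating records differing by 2.0 in one numeric attribute and in one categorical setting are at distance 2.0² + gamma; tuning gamma balances numeric against categorical influence on cluster assignment.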

  4. DebtRank-transparency: Controlling systemic risk in financial networks

    PubMed Central

    Thurner, Stefan; Poledna, Sebastian

    2013-01-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (such as DebtRank) of the interbank liability network. With a simple agent-based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e. making the DebtRank of individual banks visible to others, and by imposing a rule that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed. PMID:23712454
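A minimal reading of the transparency rule can be sketched as follows, assuming a borrower simply prefers the least systemically risky visible counterparty; the deterministic argmin is our simplification for illustration, not the paper's agent-based model.

```python
def pick_counterparty(debtrank):
    """Given published DebtRank values (bank -> systemic risk score),
    choose the least systemically risky lender. A simplified, deterministic
    reading of the borrowing rule described in the abstract."""
    return min(debtrank, key=debtrank.get)
```

Under such a rule, borrowing flows away from high-DebtRank nodes, which is the mechanism the paper credits with suppressing cascading failures.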

  5. Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2) , su(3) , and g(2)

    NASA Astrophysics Data System (ADS)

    Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.

    2016-08-01

    We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.

  6. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  7. Discrete Photodetection and Susskind-Glogower Phase Operators

    NASA Technical Reports Server (NTRS)

    Ben-Aryeh, Y.

    1996-01-01

    State reduction processes in different types of photodetection experiments are described by using different kinds of ladder operators. A special model of discrete photodetection is developed by the use of superoperators which are based on the Susskind-Glogower raising and lowering operators. The possibility of experimentally realizing the discrete photodetection scheme in a micromaser is discussed.

  8. Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1990-01-01

    An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
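
The "smart interpolation" at the heart of ENO can be illustrated by adaptive stencil selection: starting from one point, the stencil is grown toward whichever side has the smaller Newton divided difference, so it avoids crossing a discontinuity. A minimal one-dimensional sketch (data and stencil width are made up for illustration):

```python
# Minimal ENO-style stencil selection: grow toward the smoother side,
# as measured by Newton divided differences.

def divided_diff(x, y):
    """Highest-order Newton divided difference for the points (x, y)."""
    d = list(y)
    for order in range(1, len(y)):
        d = [(d[i + 1] - d[i]) / (x[i + order] - x[i])
             for i in range(len(d) - 1)]
    return d[0]

def eno_stencil(x, y, i, k):
    """Grow a k-point stencil from point i toward the smoother side."""
    left = right = i
    while right - left + 1 < k:
        cl = (abs(divided_diff(x[left - 1:right + 1], y[left - 1:right + 1]))
              if left > 0 else float("inf"))
        cr = (abs(divided_diff(x[left:right + 2], y[left:right + 2]))
              if right < len(x) - 1 else float("inf"))
        if cl < cr:
            left -= 1
        else:
            right += 1
    return left, right

# Data with a jump between x=4 and x=5
xs = [float(j) for j in range(10)]
ys = [0.0] * 5 + [10.0 + v for v in xs[5:]]

assert eno_stencil(xs, ys, 6, 3)[0] >= 5  # stencil stays right of the jump
assert eno_stencil(xs, ys, 3, 3)[1] <= 4  # and left of it when started left
```

This is the mechanism that lets ENO retain high-order accuracy near extrema and shocks, instead of the local first-order clipping of TVD limiters.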

  9. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
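
For orientation, the basic ensemble Kalman analysis step (without the paper's localization or GMM refinements) is simple for a single scalar observation: the Kalman gain is the ensemble covariance between parameter and predicted observation divided by the total observation variance. A bare-bones sketch with hypothetical numbers:

```python
# Plain EnKF analysis step for one scalar observation; the localization
# and GMM clustering extensions described in the paper are omitted.

def enkf_update(ensemble, hx, obs, obs_var):
    """Update each ensemble member x using its predicted observation h(x)."""
    n = len(ensemble)
    mx = sum(ensemble) / n
    mh = sum(hx) / n
    cov_xh = sum((x - mx) * (h - mh) for x, h in zip(ensemble, hx)) / (n - 1)
    var_h = sum((h - mh) ** 2 for h in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)           # Kalman gain
    return [x + gain * (obs - h) for x, h in zip(ensemble, hx)]

# Hypothetical prior ensemble of one parameter; observation operator h(x) = x
prior = [1.0, 2.0, 3.0, 4.0, 5.0]
posterior = enkf_update(prior, prior, obs=5.0, obs_var=1.0)

mean_prior = sum(prior) / len(prior)            # 3.0
mean_post = sum(posterior) / len(posterior)
assert mean_prior < mean_post < 5.0             # pulled toward the observation
```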

  10. Combined peak-to-average power ratio reduction and physical layer security enhancement in optical orthogonal frequency division multiplexing visible-light communication systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Chen, Shoufa

    2016-07-01

    A physical encryption scheme for discrete Hartley transform (DHT) precoded orthogonal frequency division multiplexing (OFDM) visible-light communication (VLC) systems using frequency-domain chaos scrambling is proposed. In the scheme, the chaos scrambling, which is generated by a modified logistic mapping, is utilized to enhance physical layer security, and the DHT precoding is employed to reduce the peak-to-average power ratio (PAPR) of the OFDM signal for OFDM-based VLC. The influence of chaos scrambling on the PAPR and bit error rate (BER) of the system is studied. The experimental simulation results prove the efficiency of the proposed encryption method for DHT-precoded, OFDM-based VLC systems. The experimental results show that the proposed security scheme can protect the DHT-precoded, OFDM-based VLC system from eavesdroppers, while keeping the good BER performance of DHT-precoded systems. The BER performance of the encrypted and DHT-precoded system is almost the same as that of the conventional DHT-precoded system without encryption.
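
The quantity being reduced here, PAPR, is just the peak instantaneous power of the time-domain OFDM symbol over its average power. A from-first-principles sketch (pure-Python IDFT, no DHT precoding) shows why subcarrier phases matter:

```python
# PAPR of an OFDM symbol via a direct IDFT. All-identical subcarrier
# phases add coherently and give the worst case PAPR = N; dephasing
# (which precoding/scrambling effectively does) lowers it.

import cmath
import random

def papr(symbols):
    """PAPR of the time-domain OFDM symbol for given subcarrier symbols."""
    N = len(symbols)
    time = [sum(X * cmath.exp(2j * cmath.pi * k * n / N)
                for k, X in enumerate(symbols)) / N
            for n in range(N)]
    powers = [abs(x) ** 2 for x in time]
    return max(powers) / (sum(powers) / N)

N = 16
worst = [1.0] * N                           # all subcarriers in phase
random.seed(0)
dephased = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]

assert abs(papr(worst) - N) < 1e-9          # coherent peak: PAPR equals N
assert papr(dephased) < papr(worst)         # dephasing lowers the PAPR
```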

  11. Evaluation of a controlled drinking minimal intervention for problem drinkers in general practice (the DRAMS scheme)

    PubMed Central

    Heather, Nick; Campion, Peter D.; Neville, Ronald G.; Maccabe, David

    1987-01-01

    Sixteen general practitioners participated in a controlled trial of the Scottish Health Education Group's DRAMS (drinking reasonably and moderately with self-control) scheme. The scheme was evaluated by randomly assigning 104 heavy or problem drinkers to three groups – a group participating in the DRAMS scheme (n = 34), a group given simple advice only (n = 32) and a non-intervention control group (n = 38). Six month follow-up information was obtained for 91 subjects (87.5% of initial sample). There were no significant differences between the groups in reduction in alcohol consumption, but patients in the DRAMS group showed a significantly greater reduction in a logarithmic measure of serum gamma-glutamyl-transpeptidase than patients in the group receiving advice only. Only 14 patients in the DRAMS group completed the full DRAMS procedure. For the sample as a whole, there was a significant reduction in alcohol consumption, a significant improvement on a measure of physical health and well-being, and significant reductions in the logarithmic measure of serum gamma-glutamyl transpeptidase and in mean corpuscular volume. The implications of these findings for future research into controlled drinking minimal interventions in general practice are discussed. PMID:3448228

  12. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Zhu, Feng; Ukkusuri, Satish V.

    Here, this research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variants of the RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show significant reductions in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
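
Average-reward RL methods of this family replace the discounted target with r - rho + max Q(s',·), where rho is a running estimate of the average reward. A toy single-intersection sketch (action names and rewards are invented, not the paper's formulation, and rho is updated on every step for simplicity):

```python
# Toy average-reward ("R-learning" style) update on a one-state problem:
# Q(s,a) += alpha * (r - rho + max_a' Q(s',a') - Q(s,a)).

def train(steps=200, alpha=0.1, beta=0.05):
    # Two hypothetical signal actions with fixed rewards standing in for
    # delay savings at an intersection.
    Q = {"extend": 0.0, "switch": 0.0}
    reward = {"extend": 1.0, "switch": 0.2}
    rho = 0.0                       # running average-reward estimate
    actions = list(Q)
    for t in range(steps):
        # explore by alternating actions early on, then act greedily
        a = actions[t % 2] if t < 50 else max(Q, key=Q.get)
        r = reward[a]
        best_next = max(Q.values())
        Q[a] += alpha * (r - rho + best_next - Q[a])   # R-learning update
        rho += beta * (r - rho)     # simplified: update rho every step
    return Q

Q = train()
assert Q["extend"] > Q["switch"]    # the higher-reward action is preferred
```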

  13. Impact of Publicly Financed Health Insurance Schemes on Healthcare Utilization and Financial Risk Protection in India: A Systematic Review.

    PubMed

    Prinja, Shankar; Chauhan, Akashdeep Singh; Karan, Anup; Kaur, Gunjeet; Kumar, Rajesh

    2017-01-01

    Several publicly financed health insurance schemes have been launched in India with the aim of providing universal health coverage (UHC). In this paper, we report the impact of publicly financed health insurance schemes on health service utilization, out-of-pocket (OOP) expenditure, financial risk protection and health status. Empirical research studies focussing on the impact or evaluation of publicly financed health insurance schemes in India were searched on PubMed, Google Scholar, Ovid, Scopus, Embase and relevant websites. The studies were selected based on two-stage screening per PRISMA guidelines, in which two researchers independently assessed the suitability and quality of the studies. The studies included in the review were divided into two groups, i.e., with and without a comparison group. To assess the impact on utilization, OOP expenditure and health indicators, only the studies with a comparison group were reviewed. Out of 1265 articles screened after the initial search, 43 studies were found eligible and reviewed in full text, finally yielding 14 studies which had a comparator group in their evaluation design. All the studies (n = 7) focussing on utilization showed a positive effect in terms of increase in the consumption of health services with introduction of health insurance. About 70% of the studies (n = 5) with a strong design and assessing financial risk protection showed no impact in reduction of OOP expenditures, while the remaining 30% of evaluations (n = 2), which particularly evaluated state-sponsored health insurance schemes, reported a decline in OOP expenditure among the enrolled households. One study which evaluated impact on health outcomes showed a reduction in mortality among enrolled as compared to non-enrolled households, from conditions covered by the insurance scheme. 
While utilization of healthcare did improve among those enrolled in the scheme, there is no clear evidence yet to suggest that these have resulted in reduced OOP expenditures or higher financial risk protection.

  14. Impact of Publicly Financed Health Insurance Schemes on Healthcare Utilization and Financial Risk Protection in India: A Systematic Review

    PubMed Central

    Chauhan, Akashdeep Singh; Karan, Anup; Kaur, Gunjeet; Kumar, Rajesh

    2017-01-01

    Several publicly financed health insurance schemes have been launched in India with the aim of providing universal health coverage (UHC). In this paper, we report the impact of publicly financed health insurance schemes on health service utilization, out-of-pocket (OOP) expenditure, financial risk protection and health status. Empirical research studies focussing on the impact or evaluation of publicly financed health insurance schemes in India were searched on PubMed, Google Scholar, Ovid, Scopus, Embase and relevant websites. The studies were selected based on two-stage screening per PRISMA guidelines, in which two researchers independently assessed the suitability and quality of the studies. The studies included in the review were divided into two groups, i.e., with and without a comparison group. To assess the impact on utilization, OOP expenditure and health indicators, only the studies with a comparison group were reviewed. Out of 1265 articles screened after the initial search, 43 studies were found eligible and reviewed in full text, finally yielding 14 studies which had a comparator group in their evaluation design. All the studies (n = 7) focussing on utilization showed a positive effect in terms of increase in the consumption of health services with introduction of health insurance. About 70% of the studies (n = 5) with a strong design and assessing financial risk protection showed no impact in reduction of OOP expenditures, while the remaining 30% of evaluations (n = 2), which particularly evaluated state-sponsored health insurance schemes, reported a decline in OOP expenditure among the enrolled households. One study which evaluated impact on health outcomes showed a reduction in mortality among enrolled as compared to non-enrolled households, from conditions covered by the insurance scheme. 
While utilization of healthcare did improve among those enrolled in the scheme, there is no clear evidence yet to suggest that these have resulted in reduced OOP expenditures or higher financial risk protection. PMID:28151946

  15. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term to the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies on the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias is substantially alleviated by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.

  16. A chaotic modified-DFT encryption scheme for physical layer security and PAPR reduction in OFDM-PON

    NASA Astrophysics Data System (ADS)

    Fu, Xiaosong; Bi, Meihua; Zhou, Xuefang; Yang, Guowei; Li, Qiliang; Zhou, Zhao; Yang, Xuelin

    2018-05-01

    This letter proposes a modified discrete Fourier transform (DFT) encryption scheme with multi-dimensional chaos for physical layer security and peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing passive optical network (OFDM-PON) systems. This multiple-fold encryption algorithm is mainly composed of column vector permutation and random phase encryption in the standard DFT matrix, which can create a key space of ∼10^551. The transmission of a ∼10 Gb/s encrypted OFDM signal is verified over 20-km standard single-mode fiber (SMF). Moreover, experimental results show that the proposed scheme can achieve ∼2.6-dB PAPR reduction and ∼1-dB improvement of receiver sensitivity compared with the common OFDM-PON.
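
The column-permutation half of such schemes is easy to sketch: a chaotic orbit is generated, and its sort order defines the permutation that a key holder can invert. The sketch below uses the plain logistic map with assumed parameters, not the paper's modified variant or its DFT-matrix construction:

```python
# Chaos-driven permutation scrambling sketch. Only a key holder who knows
# the map parameters (x0, r) can regenerate the permutation and descramble.

def chaotic_permutation(n, x0=0.3, r=3.99, skip=100):
    """Permutation of range(n) from the sort order of a logistic-map orbit."""
    x = x0
    for _ in range(skip):               # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return sorted(range(n), key=lambda i: orbit[i])

def scramble(data, perm):
    return [data[p] for p in perm]

def descramble(data, perm):
    out = [None] * len(data)
    for i, p in enumerate(perm):
        out[p] = data[i]
    return out

perm = chaotic_permutation(8)
assert sorted(perm) == list(range(8))                # a valid permutation
msg = [10, 20, 30, 40, 50, 60, 70, 80]
assert descramble(scramble(msg, perm), perm) == msg  # key holder recovers data
```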

  17. VLSI Technology for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks of cognitive radio is the efficiency of the spectrum sensing scheme used to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and doesn’t require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensor. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the XILINX ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
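
Energy detection itself is exactly as simple as the abstract says: average the squared magnitude of N received samples and compare with a threshold set from the noise floor. A software sketch with illustrative numbers (the paper's contribution is a hardware realization of this idea, not this code):

```python
# Minimal energy-detection spectrum sensing: declare the channel busy when
# the average sample energy exceeds a threshold above the noise floor.

import math
import random

def energy(samples):
    return sum(s * s for s in samples) / len(samples)

random.seed(1)
N = 512
noise = [random.gauss(0.0, 1.0) for _ in range(N)]                 # idle band
busy = [n + 1.5 * math.sin(0.2 * k) for k, n in enumerate(noise)]  # PU active

threshold = 1.5   # noise power (sigma^2 = 1) plus a margin, chosen here
assert energy(noise) < threshold < energy(busy)
```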

  18. Application of Cross-Correlation Green's Function Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of the envelope correlation coefficient (ECC) of any arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits significant reduction in ECC computation time as compared to the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering-parameter-based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
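
The conventional baseline the paper contrasts with is the S-parameter ECC estimate for a two-port antenna pair, rho_e = |S11* S12 + S21* S22|^2 / ((1 - |S11|^2 - |S21|^2)(1 - |S12|^2 - |S22|^2)), which is only valid for lossless antennas — one of the limitations alluded to. A direct implementation:

```python
# Conventional S-parameter ECC estimate for a two-element MIMO pair
# (lossless-antenna assumption; this is the baseline, not FDTD-CGF).

def ecc_from_s(S11, S12, S21, S22):
    num = abs(S11.conjugate() * S12 + S21.conjugate() * S22) ** 2
    den = ((1 - abs(S11) ** 2 - abs(S21) ** 2)
           * (1 - abs(S12) ** 2 - abs(S22) ** 2))
    return num / den

# Perfectly matched, perfectly isolated two-port: zero correlation
assert ecc_from_s(0j, 0j, 0j, 0j) == 0.0

# Some mutual coupling (hypothetical values) gives a small nonzero ECC
val = ecc_from_s(0.1 + 0j, 0.2 + 0j, 0.2 + 0j, 0.1 + 0j)
assert 0.0 < val < 1.0
```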

  19. Correction: All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driving photocatalytic CO2 reduction into renewable hydrocarbon fuel.

    PubMed

    Li, Ping; Zhou, Yong; Li, Haijin; Xu, Qinfeng; Meng, Xianguang; Wang, Xiaoyong; Xiao, Min; Zou, Zhigang

    2015-01-31

    Correction for 'All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driving photocatalytic CO2 reduction into renewable hydrocarbon fuel' by Ping Li et al., Chem. Commun., 2015, 51, 800-803.

  20. Speed Sensorless Induction Motor Drives for Electrical Actuators: Schemes, Trends and Tradeoffs

    NASA Technical Reports Server (NTRS)

    Elbuluk, Malik E.; Kankam, M. David

    1997-01-01

    For a decade, induction motor drive-based electrical actuators have been under investigation as potential replacements for the conventional hydraulic and pneumatic actuators in aircraft. Advantages of electric actuators include lower weight and size, reduced maintenance and operating costs, improved safety due to the elimination of hazardous fluids and high-pressure hydraulic and pneumatic actuators, and increased efficiency. Recently, the emphasis of research on induction motor drives has been on sensorless vector control, which eliminates flux and speed sensors mounted on the motor. Also, the development of effective speed and flux estimators has allowed good rotor flux-oriented (RFO) performance at all speeds except those close to zero. Sensorless control has improved the motor performance, compared to the Volts/Hertz (or constant flux) controls. This report evaluates documented schemes for speed sensorless drives, and discusses the trends and tradeoffs involved in selecting a particular scheme. These schemes combine the attributes of the direct and indirect field-oriented control (FOC) or use model reference adaptive systems (MRAS) with a speed-dependent current model for flux estimation which tracks the voltage model-based flux estimator. Many factors are important in comparing the effectiveness of a speed sensorless scheme. Among them are wide speed range capability, motor parameter insensitivity and noise reduction. Although a number of schemes have been proposed for solving the speed estimation, zero-speed FOC with robustness against parameter variations still remains an area of research for speed sensorless control.

  1. Effect of synthetic jet modulation schemes on the reduction of a laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Seo, J. H.; Cadieux, F.; Mittal, R.; Deem, E.; Cattafesta, L.

    2018-03-01

    The response of a laminar separation bubble to synthetic jet forcing with various modulation schemes is investigated via direct numerical simulations. A simple sinusoidal waveform is considered as a reference case, and various amplitude modulation schemes, including the square-wave "burst" modulation, are employed in the simulations. The results indicate that burst modulation is less effective at reducing the length of the flow separation than the sinusoidal forcing primarily because burst modulation is associated with a broad spectrum of input frequencies that are higher than the target frequency for the flow control. It is found that such high-frequency forcing delays vortex roll-up and promotes vortex pairing and merging, which have an adverse effect on reducing the separation bubble length. A commonly used amplitude modulation scheme is also found to have reduced effectiveness due to its spectral content. A new amplitude modulation scheme which is tailored to impart more energy at the target frequency is proposed and shown to be more effective than the other modulation schemes. Experimental measurements confirm that modulation schemes can be preserved through the actuator and used to enhance the energy content at the target modulation frequency. The present study therefore suggests that the effectiveness of synthetic jet-based flow control could be improved by carefully designing the spectral content of the modulation scheme.
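
The spectral argument is easy to verify directly: a square-wave ("burst") envelope at the target frequency carries odd harmonics in its DFT, whereas a pure sinusoid concentrates all its energy in one bin. A pure-Python check on toy envelopes (signal lengths and frequencies are illustrative):

```python
# DFT of a sinusoidal vs. square-wave (burst) modulation envelope.
# Bin 4 is the target modulation frequency; bin 12 is its 3rd harmonic.

import cmath
import math

def dft_mag(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

N = 64
sine = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]    # pure bin 4
burst = [1.0 if (n // 8) % 2 == 0 else -1.0 for n in range(N)]  # square, bin 4

S, B = dft_mag(sine), dft_mag(burst)
assert S[4] > 1.0 and B[4] > 1.0   # both drive the target frequency, but...
assert B[12] > S[12] + 1.0         # ...the burst adds a strong 3rd harmonic
```

Those extra harmonics are precisely the higher-than-target frequency content that the simulations found to delay roll-up and promote vortex pairing.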

  2. Control of a Robotic Hand Using a Tongue Control System-A Prosthesis Application.

    PubMed

    Johansen, Daniel; Cipriani, Christian; Popovic, Dejan B; Struijk, Lotte N S A

    2016-07-01

    The aim of this study was to investigate the feasibility of using an inductive tongue control system (ITCS) for controlling robotic/prosthetic hands and arms. This study presents a novel dual modal control scheme for multigrasp robotic hands combining standard electromyogram (EMG) with the ITCS. The performance of the ITCS control scheme was evaluated in a comparative study. Ten healthy subjects used both the ITCS control scheme and a conventional EMG control scheme to complete grasping exercises with the IH1 Azzurra robotic hand implementing five grasps. Time to activate a desired function or grasp (activation time, AT) was used as the performance metric. Statistically significant differences were found when comparing the performance of the two control schemes. On average, the ITCS control scheme was 1.15 s faster than the EMG control scheme, corresponding to a 35.4% reduction in the activation time. The largest difference was for grasp 5, with a mean AT reduction of 45.3% (2.38 s). The findings indicate that using the ITCS control scheme could allow for faster activation of specific grasps or functions compared with a conventional EMG control scheme. For transhumeral and especially bilateral amputees, the ITCS control scheme could have a significant impact on prosthesis control. In addition, the ITCS would provide bilateral amputees with the additional advantage of environmental and computer control, for which the ITCS was originally developed.

  3. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. 
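
The core graph-search idea behind DRG-style on-the-fly reduction can be sketched compactly: starting from user-chosen target species, a breadth-first search keeps every species reachable through edges whose interaction coefficient exceeds a tolerance, and prunes the rest. The graph and coefficients below are invented for illustration, not a real mechanism:

```python
# Simplified DRG-style mechanism reduction via BFS over interaction
# coefficients r_ab (made-up values; real r_ab come from reaction rates).

from collections import deque

def reduce_mechanism(graph, targets, eps):
    """graph[a] = {b: r_ab}; keep species coupled to the targets above eps."""
    keep = set(targets)
    queue = deque(targets)
    while queue:
        a = queue.popleft()
        for b, r_ab in graph.get(a, {}).items():
            if r_ab > eps and b not in keep:
                keep.add(b)
                queue.append(b)
    return keep

graph = {
    "fuel": {"O2": 0.9, "OH": 0.6, "N2": 0.01},
    "OH":   {"H2O": 0.8, "HO2": 0.05},
    "O2":   {"OH": 0.7},
}

kept = reduce_mechanism(graph, targets=["fuel"], eps=0.1)
assert kept == {"fuel", "O2", "OH", "H2O"}   # weakly coupled N2, HO2 pruned
```

Raising the tolerance eps prunes more species, trading accuracy for speed — the same trade-off the paper tunes for DI engine conditions.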
The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.

  4. Efficient quantum transmission in multiple-source networks.

    PubMed

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-04-02

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, the optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.

  5. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000 performs better than AVC, the state of the art video compression standard.

  6. Sensitivity based coupling strengths in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, C. L.; Sobieszczanski-Sobieski, J.

    1993-01-01

    The iterative design scheme necessary for complex engineering systems is generally time consuming and difficult to implement. Although a decomposition approach results in a more tractable problem, the inherent couplings make establishing the interdependencies of the various subsystems difficult. Another difficulty lies in identifying the most efficient order of execution for the subsystem analyses. The paper describes an approach for determining the dependencies that could be suspended during the system analysis with minimal accuracy losses, thereby reducing the system complexity. A new multidisciplinary testbed is presented, involving the interaction of structures, aerodynamics, and performance disciplines. Results are presented to demonstrate the effectiveness of the system reduction scheme.

  7. Pre-Processed Recursive Lattice Reduction for Complexity Reduction in Spatially and Temporally Correlated MIMO Channels

    NASA Astrophysics Data System (ADS)

    An, Chan-Ho; Yang, Janghoon; Jang, Seunghun; Kim, Dong Ku

    In this letter, a pre-processed lattice reduction (PLR) scheme is developed for the lattice reduction aided (LRA) detection of multiple-input multiple-output (MIMO) systems in spatially correlated channels. The PLR computes the LLL-reduced matrix of the equivalent matrix, which is the product of the present channel matrix and the unimodular transformation matrix for LR of the spatial correlation matrix, rather than the present channel matrix itself. In conjunction with PLR followed by the recursive lattice reduction (RLR) scheme [7], pre-processed RLR (PRLR) is shown to efficiently carry out the LR of the channel matrix, especially for burst packet messages in spatially and temporally correlated channels, while matching the performance of conventional LRA detection.

  8. PAPR reduction in CO-OFDM systems using IPTS and modified clipping and filtering

    NASA Astrophysics Data System (ADS)

    Tong, Zheng-rong; Hu, Ya-nong; Zhang, Wei-hua

    2018-05-01

    Aiming at the problem of the peak-to-average power ratio (PAPR) in coherent optical orthogonal frequency division multiplexing (CO-OFDM), a hybrid PAPR reduction technique for the CO-OFDM system, combining an iterative partial transmit sequence (IPTS) scheme with modified clipping and filtering (MCF), is proposed. The simulation results show that at a complementary cumulative distribution function (CCDF) of 10^-4, the PAPR of the proposed scheme is improved by 1.86 dB and 2.13 dB compared with those of the IPTS and CF schemes, respectively. Meanwhile, when the bit error rate (BER) is 10^-3, the optical signal-to-noise ratio (OSNR) is improved by 1.57 dB and 0.66 dB compared with those of the CF and IPTS-CF schemes, respectively.
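
The clipping half of clipping-and-filtering is simply amplitude limiting, which directly lowers the PAPR; the filtering stage (omitted in this sketch, as is the IPTS stage) then repairs the out-of-band spectrum the clipping creates. A minimal real-valued illustration:

```python
# Clipping stage of clipping-and-filtering on a toy real-valued signal
# with two large peaks; the spectral-repair filtering step is omitted.

def clip(x, a):
    """Limit each sample's magnitude to a."""
    return [max(-a, min(a, v)) for v in x]

def papr(x):
    powers = [v * v for v in x]
    return max(powers) / (sum(powers) / len(powers))

signal = [0.1, -0.2, 0.15, 3.0, -0.1, 0.2, -2.5, 0.05]   # two large peaks
clipped = clip(signal, 1.0)

assert max(abs(v) for v in clipped) == 1.0   # peaks limited to the threshold
assert papr(clipped) < papr(signal)          # PAPR strictly reduced
```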

  9. A Maple package for computing Gröbner bases for linear recurrence relations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-04-01

    A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  10. An opposite view data replacement approach for reducing artifacts due to metallic dental objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston

    Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also small in size along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections with the corresponding opposite-view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities at the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metrics of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to MAR algorithms based on the interpolation scheme. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors presented a new and efficient method for artifact reduction due to multiple small metallic objects. The obtained results from phantoms and clinical cases fully validate the proposed approach.
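    The opposite-view indexing can be sketched in numpy under an idealized assumption of parallel-beam data covering 360 degrees, where the ray at (theta + 180°, s) retraces the ray at (theta, -s). This is only a bookkeeping illustration: the paper exploits geometries in which the conjugate ray actually misses the small metal object, and its fading (blending) step is omitted here:

    ```python
    import numpy as np

    def opposite_view_replace(sinogram, metal_mask):
        """Replace metal-corrupted sinogram bins with opposite-view values.

        sinogram: (n_angles, n_det) array covering 360 degrees.
        metal_mask: boolean array, True where a bin is affected by metal.
        Conjugate-ray identity assumed: p[i, j] <-> p[i + n/2, n_det-1-j].
        """
        n_angles, n_det = sinogram.shape
        assert n_angles % 2 == 0, "needs an even number of views over 360 deg"
        out = sinogram.copy()
        idx_a, idx_d = np.nonzero(metal_mask)
        opp_a = (idx_a + n_angles // 2) % n_angles
        opp_d = n_det - 1 - idx_d
        out[idx_a, idx_d] = sinogram[opp_a, opp_d]
        return out
    ```

    A practical implementation would additionally blend the replaced region into its neighborhood (the fading process mentioned above) to avoid streaks from abrupt transitions in the sinogram.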

  11. Semi-active control of helicopter vibration using controllable stiffness and damping devices

    NASA Astrophysics Data System (ADS)

    Anusonti-Inthra, Phuriwat

    Semi-active concepts for helicopter vibration reduction are developed and evaluated in this dissertation. Semi-active devices, controllable stiffness devices or controllable orifice dampers, are introduced: (i) in the blade root region (rotor-based concept), and (ii) between the rotor and the fuselage as semi-active isolators (in the non-rotating frame). Corresponding semi-active controllers for helicopter vibration reduction are also developed. The effectiveness of the rotor-based semi-active vibration reduction concept (using stiffness and damping variation) is demonstrated for a 4-bladed hingeless rotor helicopter in moderate- to high-speed forward flight. A sensitivity study shows that stiffness variation of the root element can reduce hub vibrations when the proper amplitude and phase are used. Furthermore, the optimal semi-active control scheme can determine the combination of stiffness variations that produces significant vibration reduction in all components of vibratory hub loads simultaneously. It is demonstrated that desired cyclic variations in the properties of the blade root region can be practically achieved using discrete controllable stiffness devices and controllable dampers, especially in the flap and lag directions. These discrete controllable devices can produce a 35--50% reduction in a composite vibration index representing all components of vibratory hub loads. No detrimental increases are observed in the lower harmonics of blade loads and blade response (which contribute to the dynamic stresses) or in the controllable device internal loads when the optimal stiffness and damping variations are introduced. The effectiveness of optimal stiffness and damping variations in reducing hub vibration is retained over a range of cruise speeds and for variations in fundamental rotor properties. The effectiveness of the semi-active isolator is demonstrated for a simplified single degree of freedom system representing the semi-active isolation system. 
The rotor, represented by a lumped mass under harmonic force excitation, is supported by a spring and a parallel damper on the fuselage (assumed to have infinite mass). Properties of the spring or damper can then be controlled to reduce transmission of the force into the fuselage or the support structure. This semi-active isolation concept can produce additional 30% vibration reduction beyond the level achieved by a passive isolator. Different control schemes (i.e. open-loop, closed-loop, and closed-loop adaptive schemes) are developed and evaluated to control transmission of vibratory loads to the support structure (fuselage), and it is seen that a closed-loop adaptive controller is required to retain vibration reduction effectiveness when there is a change in operating condition. (Abstract shortened by UMI.)

  12. An Effective Delay Reduction Approach through a Portion of Nodes with a Larger Duty Cycle for Industrial WSNs

    PubMed Central

    Wu, Minrui; Wu, Yanhui; Liu, Chuyao; Cai, Zhiping; Ma, Ming

    2018-01-01

    For Industrial Wireless Sensor Networks (IWSNs), delivering the data monitored by sensor nodes to the sink (or control center, CC) in a timely manner is a challenging issue. However, in order to save energy, wireless sensor networks based on a duty cycle are widely used in the industrial field, which can introduce large delays into data transmission. We observe that if the duty cycle of a small number of nodes in the network is set to 1, the sleep delay caused by the duty cycle can be effectively reduced. Thus, in this paper, a novel Portion of Nodes with Larger Duty Cycle (PNLDC) scheme is proposed to reduce delay and optimize energy efficiency for IWSNs. In the PNLDC scheme, a portion of nodes are selected to set their duty cycle to 1, and the proportion of nodes with a duty cycle of 1 is determined according to the energy abundance of the area in which the node is located: the more residual energy in the region, the greater the proportion of selected nodes. Because a certain proportion of nodes in the network have a duty cycle of 1, the PNLDC scheme can effectively reduce delay in IWSNs. The performance analysis and experimental results show that the proposed scheme significantly reduces the delay for forwarding data by 8.9~26.4% and the delay for detection by 2.1~24.6% without reducing the network lifetime when compared with the fixed duty cycle method. Meanwhile, compared with the dynamic duty cycle strategy, the proposed scheme has certain advantages in terms of energy utilization and delay reduction. PMID:29757236

  13. An Effective Delay Reduction Approach through a Portion of Nodes with a Larger Duty Cycle for Industrial WSNs.

    PubMed

    Wu, Minrui; Wu, Yanhui; Liu, Chuyao; Cai, Zhiping; Xiong, Neal N; Liu, Anfeng; Ma, Ming

    2018-05-12

    For Industrial Wireless Sensor Networks (IWSNs), delivering the data monitored by sensor nodes to the sink (or control center, CC) in a timely manner is a challenging issue. However, in order to save energy, wireless sensor networks based on a duty cycle are widely used in the industrial field, which can introduce large delays into data transmission. We observe that if the duty cycle of a small number of nodes in the network is set to 1, the sleep delay caused by the duty cycle can be effectively reduced. Thus, in this paper, a novel Portion of Nodes with Larger Duty Cycle (PNLDC) scheme is proposed to reduce delay and optimize energy efficiency for IWSNs. In the PNLDC scheme, a portion of nodes are selected to set their duty cycle to 1, and the proportion of nodes with a duty cycle of 1 is determined according to the energy abundance of the area in which the node is located: the more residual energy in the region, the greater the proportion of selected nodes. Because a certain proportion of nodes in the network have a duty cycle of 1, the PNLDC scheme can effectively reduce delay in IWSNs. The performance analysis and experimental results show that the proposed scheme significantly reduces the delay for forwarding data by 8.9~26.4% and the delay for detection by 2.1~24.6% without reducing the network lifetime when compared with the fixed duty cycle method. Meanwhile, compared with the dynamic duty cycle strategy, the proposed scheme has certain advantages in terms of energy utilization and delay reduction.
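    The core selection rule, a per-region always-on share that grows with regional residual energy, can be sketched as follows (a minimal sketch; the `base_fraction` scaling and the preference for the highest-energy nodes are illustrative assumptions, not the paper's exact rule):

    ```python
    def select_always_on(nodes, base_fraction=0.1):
        """Pick nodes whose duty cycle is set to 1, region by region.

        nodes: dict mapping region name -> list of (node_id, residual_energy).
        A region's always-on fraction scales with its mean residual energy,
        so energy-rich regions contribute proportionally more such nodes.
        """
        max_mean = max(
            sum(e for _, e in members) / len(members)
            for members in nodes.values()
        )
        always_on = []
        for region, members in nodes.items():
            mean_e = sum(e for _, e in members) / len(members)
            frac = base_fraction * mean_e / max_mean
            k = max(1, round(frac * len(members)))
            # prefer the nodes with the most residual energy in the region
            ranked = sorted(members, key=lambda m: m[1], reverse=True)
            always_on.extend(nid for nid, _ in ranked[:k])
        return always_on
    ```

    Always-on nodes then forward packets without sleep delay, which is what yields the delay reductions reported above without shortening network lifetime, since the extra load falls on energy-rich regions.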

  14. Drag reduction strategies

    NASA Technical Reports Server (NTRS)

    Hill, D. Christopher

    1994-01-01

    Previously, a description was given of an active control scheme using wall transpiration that leads to a 15% reduction in surface skin friction beneath a turbulent boundary layer, according to direct numerical simulation. In this research brief, further details of that scheme and its variants are given, together with some suggestions as to how sensor/actuator arrays could be configured to reduce surface drag. The research summarized here was performed during the first half of 1994. This research is motivated by the need to better understand how the dynamics of near-wall turbulent flow can be modified so that skin friction is reduced. The reduction of turbulent skin friction is highly desirable in many engineering applications. Experiments and direct numerical simulations have led to an increased understanding of the cycle of turbulence production and transport in the boundary layer and raised awareness of the possibility of disrupting the process, with a subsequent reduction in turbulent skin friction. The implementation of active feedback control in a computational setting is a viable approach for investigating the modifications to the flow physics that can be achieved. Bewley et al. and Hill describe how ideas from optimal control theory are employed to give 'sub-optimal' drag reduction schemes. The objectives of the work reported here are to investigate in greater detail the assumptions implicit within such schemes and their limitations, and to describe how an array of sensors and actuators could be arranged and interconnected to form a 'smart' surface which has low skin friction.

  15. Finite element analysis of the end notched flexure specimen for measuring Mode II fracture toughness

    NASA Technical Reports Server (NTRS)

    Gillespie, J. W., Jr.; Carlsson, L. A.; Pipes, R. B.

    1986-01-01

    The paper presents a finite element analysis of the end-notched flexure (ENF) test specimen for Mode II interlaminar fracture testing of composite materials. Virtual crack closure and compliance techniques employed to calculate strain energy release rates from linear elastic two-dimensional analysis indicate that the ENF specimen is a pure Mode II fracture test within the constraints of small deflection theory. Furthermore, the ENF fracture specimen is shown to be relatively insensitive to process-induced cracks offset from the laminate midplane. Frictional effects are investigated by including the contact problem in the finite element model. A parametric study investigating the influence of delamination length, span, thickness, and material properties assessed the accuracy of beam theory expressions for compliance and strain energy release rate, GII. Finite element results indicate that data reduction schemes based upon beam theory underestimate GII by approximately 20-40 percent for typical unidirectional graphite fiber composite test specimen geometries. Consequently, an improved data reduction scheme is proposed.
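    The beam-theory data reduction being assessed above rests on the widely used ENF expressions C = (2L^3 + 3a^3)/(8*E1*b*h^3) and G_II = (P^2/2b) dC/da = 9*P^2*a^2/(16*E1*b^2*h^3) (half-span L, crack length a, load P, width b, half-thickness h, axial modulus E1). A sketch of that classical reduction, not the paper's improved finite-element-corrected scheme, is:

    ```python
    def enf_compliance(a, L, E1, b, h):
        """Beam-theory compliance of the ENF specimen."""
        return (2 * L**3 + 3 * a**3) / (8 * E1 * b * h**3)

    def enf_gII(P, a, E1, b, h):
        """Beam-theory Mode II strain energy release rate,
        G_II = (P^2 / 2b) * dC/da, differentiated in closed form."""
        return 9 * P**2 * a**2 / (16 * E1 * b**2 * h**3)
    ```

    Per the finite element results above, values from `enf_gII` would understate the true G_II by roughly 20-40 percent for typical unidirectional graphite specimens, which is the motivation for the improved reduction scheme.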

  16. Reduction by symmetries in singular quantum-mechanical problems: General scheme and application to Aharonov-Bohm model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smirnov, A. G., E-mail: smirnov@lpi.ru

    2015-12-15

    We develop a general technique for finding self-adjoint extensions of a symmetric operator that respects a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to the three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.

  17. New preemptive scheduling for OBS networks considering cascaded wavelength conversion

    NASA Astrophysics Data System (ADS)

    Gao, Xingbo; Bassiouni, Mostafa A.; Li, Guifang

    2009-05-01

    In this paper we introduce a new preemptive scheduling technique for next generation optical burst-switched networks that considers the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all-optically from source to destination, each wavelength conversion performed along the lightpath may cause a certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver will not be able to recover the original data. Accordingly, taking this practical impediment into account, we improve a recently proposed fair channel scheduling algorithm to address the fairness problem while simultaneously aiming at burst loss reduction in optical burst switching. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already performed wavelength conversions for this burst, among other factors. When contention occurs, a newly arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results show that the proposed scheme further improves fairness and achieves burst loss reduction as well.
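    The priority and preemption logic can be sketched in a few lines (a minimal sketch: capping priority at the conversion threshold and requiring strict seniority to preempt are illustrative assumptions, the paper's priority also includes other factors):

    ```python
    def burst_priority(conversions_done, threshold):
        """Priority grows with wavelength conversions already performed,
        capped at the signal-quality constraint threshold (illustrative)."""
        return min(conversions_done, threshold)

    def may_preempt(arriving, scheduled, threshold):
        """An arriving burst preempts a scheduled one only when strictly
        senior, i.e. it has less conversion headroom left than the other."""
        return burst_priority(arriving, threshold) > burst_priority(scheduled, threshold)
    ```

    The intuition is that a burst which has already consumed most of its conversion budget cannot tolerate being displaced onto yet another wavelength, so it is given scheduling seniority; this is what improves fairness while also reducing loss.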

  18. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    PubMed

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. The scheme adopts the discrete wavelet transform (DWT) method to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values of less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, such as limited payload size and low power consumption.
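    The final encoding stage relies on run-length encoding, which compresses well because thresholded DWT coefficient streams contain long runs of zeros. A plain RLE round trip (not the paper's modified variant, whose exact bit layout is not given in the abstract) looks like:

    ```python
    def rle_encode(values):
        """Run-length encode a sequence as (value, run_length) pairs."""
        encoded = []
        for v in values:
            if encoded and encoded[-1][0] == v:
                encoded[-1] = (v, encoded[-1][1] + 1)
            else:
                encoded.append((v, 1))
        return encoded

    def rle_decode(pairs):
        """Invert rle_encode, expanding each run back into values."""
        out = []
        for v, n in pairs:
            out.extend([v] * n)
        return out
    ```

    On a zero-heavy coefficient stream such as `[0, 0, 0, 5, 5, 1, 0, 0]`, the encoder emits four pairs instead of eight values, and decoding recovers the input exactly, the lossless step on top of the lossy DWT thresholding.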

  19. Antimicrobial reduction measures applied in Danish pig herds following the introduction of the "Yellow Card" antimicrobial scheme.

    PubMed

    Dupont, Nana; Diness, Line Hummelmose; Fertner, Mette; Kristensen, Charlotte Sonne; Stege, Helle

    2017-03-01

    Following introduction of the antimicrobial restrictive "Yellow Card Scheme" in summer 2010, a rapid decrease in the Danish national pig antimicrobial consumption was observed. The aims of this study were to (i) investigate which measures had been implemented to reduce the antimicrobial consumption according to farmers and veterinarians and (ii) where possible, investigate if said measures were reflected in the herds' antimicrobial purchase data. Based on national register data from VetStat and the Central Husbandry Register, the study population was selected among Danish pig herds which had decreased their annual antimicrobial consumption with ≥10% following the introduction of the Yellow Card Scheme comparing June 1, 2009-May 31, 2010 to June 1, 2010-May 31, 2011. Subsequently, questionnaire surveys of both farmers and veterinarians were carried out, resulting in responses from 179 farmers accounting for 202 herds (response ratio: 83%) and 58 veterinarians accounting for 140 herds. Prior to the introduction of the Yellow Card Scheme, 24% of the participating herds had an antimicrobial consumption for one or more age groups which exceeded the Yellow Card Scheme threshold values on antimicrobial consumption, while 50% of the herds had an antimicrobial consumption below the national average. The measures most frequently stated as having contributed to the antimicrobial reduction were increased use of vaccines (52% of farmers; 35% of the veterinarians), less use of group medication (44% of the farmers; 58% of the veterinarians) and staff education (22% of the farmers; 26% of the veterinarians). Reduced usage of antimicrobials for oral use accounted for 89% of the total reduction in antimicrobial use. Among the farmers, 13% also stated that change in choice of product had contributed to reducing their antimicrobial consumption. 
    However, when analyzing purchase data, no general trend was seen towards a larger purchase of products with a higher registered dosage per kg animal compared to similar products. The findings of this study indicate that implementation of antimicrobial restrictive legislation at herd level may lead to a variety of antimicrobial reducing initiatives, in herds with both high and low previous levels of antimicrobial consumption. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. PBT assessment under REACH: Screening for low aquatic bioaccumulation with QSAR classifications based on physicochemical properties to replace BCF in vivo testing on fish.

    PubMed

    Nendza, Monika; Kühne, Ralph; Lombardo, Anna; Strempel, Sebastian; Schüürmann, Gerrit

    2018-03-01

    Aquatic bioconcentration factors (BCFs) are critical in PBT (persistent, bioaccumulative, toxic) and risk assessment of chemicals. High costs and the use of more than 100 fish per standard BCF study (OECD 305) call for alternative methods to replace as much in vivo testing as possible. The BCF waiving scheme is a screening tool combining QSAR classifications based on physicochemical properties related to the distribution (hydrophobicity, ionisation), persistence (biodegradability, hydrolysis), solubility and volatility (Henry's law constant) of substances in water bodies and aquatic biota to predict substances with low aquatic bioaccumulation (nonB, BCF<2000). The BCF waiving scheme was developed with a dataset of reliable BCFs for 998 compounds and externally validated with another 181 substances. It performs with 100% sensitivity (no false negatives), >50% efficacy (waiving potential), and complies with the OECD principles for valid QSARs. The chemical applicability domain of the BCF waiving scheme is given by the structures of the training set, with some compound classes explicitly excluded, such as organometallics, poly- and perfluorinated compounds, aromatic triphenylphosphates, and surfactants. The prediction confidence of the BCF waiving scheme is based on applicability domain compliance, consensus modelling, and structural similarity with known nonB and B/vB substances. Compounds classified as nonB by the BCF waiving scheme are candidates for waiving of BCF in vivo testing on fish due to low concern with regard to the B criterion. The BCF waiving scheme supports the 3Rs with a possible reduction of >50% of BCF in vivo testing on fish. If the target chemical is outside the applicability domain of the BCF waiving scheme or not classified as nonB, further assessments with in silico, in vitro or in vivo methods are necessary to either confirm or reject bioaccumulative behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.
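    The decision logic, consensus modelling constrained to zero false negatives, can be caricatured in a few lines (a sketch only: requiring applicability-domain compliance plus unanimity among classifiers is one simple way to aim for 100% sensitivity; the actual scheme's rules are more elaborate):

    ```python
    def waive_bcf_testing(classifier_votes, in_domain):
        """Waive in vivo BCF fish testing only when the substance is inside
        the applicability domain AND every QSAR classifier votes 'nonB'.
        Any dissenting or missing vote sends the compound to further
        assessment instead (conservative, no-false-negatives policy)."""
        return bool(in_domain and classifier_votes
                    and all(v == "nonB" for v in classifier_votes))
    ```

    Under this conservative policy a compound is never waived on a split vote, which mirrors the scheme's design goal: waiving potential above 50% is acceptable only because sensitivity stays at 100%.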

  1. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared with its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  2. Efficient Quantum Transmission in Multiple-Source Networks

    PubMed Central

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and on the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590

  3. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to deliver both. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design that is robust to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and only apply the VD scheme in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
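    The segmentation strategy, fully sampling a low-frequency band and confining randomness to the high frequencies, can be sketched as a 1D phase-encode mask (a minimal sketch: the 8% center fraction and 25% high-frequency probability are illustrative assumptions, and the paper's adaptive segmentation is replaced by a fixed split):

    ```python
    import numpy as np

    def vd_mask(n_lines, center_fraction=0.08, high_freq_prob=0.25, seed=0):
        """Under-sampling mask: fully sample a band around the k-space
        centre, randomly sample the high frequencies at a fixed rate."""
        rng = np.random.default_rng(seed)
        mask = rng.random(n_lines) < high_freq_prob
        n_center = max(1, int(round(center_fraction * n_lines)))
        lo = n_lines // 2 - n_center // 2
        mask[lo:lo + n_center] = True   # deterministic low-frequency band
        return mask

    m = vd_mask(256)
    print(m.sum() / m.size)  # achieved sampling fraction
    ```

    Because the contrast-bearing centre of k-space is acquired deterministically, re-drawing the random high-frequency samples (or perturbing the sampling probability) changes the mask much less where it matters, which is the robustness property the design targets.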

  4. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton-iteration time-marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate that this will provide up to a 65 percent reduction in computer time requirements over the existing class of explicit and implicit time-marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
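    The GMRES scheme mentioned above, minimizing the residual over a Krylov subspace built by Arnoldi iteration, can be sketched in its most basic form (illustrative only: no restarts or preconditioning, and a zero initial guess, whereas production flow solvers add both):

    ```python
    import numpy as np

    def gmres_sketch(A, b, m=20):
        """Basic GMRES: Arnoldi with modified Gram-Schmidt, then a small
        least-squares solve for the minimum-residual Krylov combination."""
        n = b.size
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(b)
        Q[:, 0] = b / beta
        k = m
        for j in range(m):
            v = A @ Q[:, j]
            for i in range(j + 1):            # orthogonalize against basis
                H[i, j] = Q[:, i] @ v
                v -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] < 1e-12:           # lucky breakdown: exact solve
                k = j + 1
                break
            Q[:, j + 1] = v / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        # minimise ||beta*e1 - H y|| over the k-dimensional Krylov subspace
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        return Q[:, :k] @ y
    ```

    In an implicit Newton time-marching context, a solver of this kind replaces the direct linear solve at each Newton step, which is the source of the computer-time savings cited above.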

  5. An Improved Cross-Layering Design for IPv6 Fast Handover with IEEE 802.16m Entry Before Break Handover

    NASA Astrophysics Data System (ADS)

    Kim, Ronny Yongho; Jung, Inuk; Kim, Young Yong

    IEEE 802.16m is an advanced air interface standard under development for IMT-Advanced systems, known as 4G systems. IEEE 802.16m is designed to provide a high data rate and a Quality of Service (QoS) level that meet user service requirements, and is especially suitable for mobilized environments. There are several factors that have great impact on such requirements; as one of the major factors, we mainly focus on latency issues. In IEEE 802.16m, an enhanced layer 2 handover scheme, described as Entry Before Break (EBB), was proposed and adopted to reduce handover latency. EBB provides a significant handover interruption time reduction with respect to the legacy IEEE 802.16 handover scheme. Fast handovers for mobile IPv6 (FMIPv6) was standardized by the Internet Engineering Task Force (IETF) in order to reduce handover interruption time from the IP layer perspective. Since FMIPv6 utilizes link layer triggers to reduce handover latency, it is critical to jointly design FMIPv6 with its underlying link layer protocol. However, an FMIPv6 design based on the new EBB handover scheme has not yet been proposed. In this paper, we propose an improved cross-layering design for FMIPv6 based on the IEEE 802.16m EBB handover. In comparison with conventional FMIPv6 based on the legacy IEEE 802.16 network, the overall handover interruption time can be significantly reduced by employing the proposed design. The benefits of this latency reduction for mobile user applications are thoroughly investigated with both numerical analysis and simulation of various IP applications.

  6. Model assessment of atmospheric pollution control schemes for critical emission regions

    NASA Astrophysics Data System (ADS)

    Zhai, Shixian; An, Xingqin; Liu, Zhao; Sun, Zhaobin; Hou, Qing

    2016-01-01

    In recent years, the atmospheric environment in portions of China has become significantly degraded and the need for emission controls has become urgent. Because more international events are being planned, it is important to implement air quality assurance targeted at significant events held over specific periods of time. This study sets Yanqihu (YQH), Beijing, the location of the 2014 Beijing APEC (Asia-Pacific Economic Cooperation) summit, as the target region. By using the atmospheric inversion model FLEXPART, we determined the sensitive source zones that had the greatest impact on the air quality of the YQH region in November 2012. We then used the air-quality model Models-3/CMAQ and a high-resolution emissions inventory of the Beijing-Tianjin-Hebei region to establish emission reduction tests for the entire source area and for specific sensitive source zones. This was achieved by initiating emission reduction schemes at different ratios and different times. The results showed that initiating a moderate reduction of emissions days prior to a potential event is more beneficial to the air quality of Beijing than initiating a high-strength reduction campaign on the day of the event. The sensitive source zone of Beijing (BJ-Sens) accounts for 54.2% of the total source area of Beijing (BJ), but its reduction effect reaches 89%-100% of the total area, with a reduction efficiency 1.6-1.9 times greater than that of the entire area. The sensitive source zone of Huabei (HuaB-Sens.) only represents 17.6% of the total area of Huabei (HuaB), but its emission reduction effect reaches 59%-97% of the entire area, with a reduction efficiency 4.2-5.5 times greater than that of the total area. The earlier that emission reduction measures are implemented, the greater the effect they have on preventing the transmission of pollutants. 
In addition, expanding the controlling areas to sensitive provinces and cities around Beijing (HuaB-sens) can significantly accelerate the reduction effects compared to controlling measures only in the Beijing sensitive source zone (BJ-Sens). Therefore, when enacting emission reduction schemes, cooperating with surrounding provinces and cities, as well as narrowing the reduction scope to specific sensitive source zones prior to unfavorable meteorological conditions, can help reduce emissions control costs and improve the efficiency and maneuverability of emission reduction schemes.

  7. Integrating funds for health and social care: an evidence review.

    PubMed

    Mason, Anne; Goddard, Maria; Weatherly, Helen; Chalkley, Martin

    2015-07-01

    Integrated funds for health and social care are one possible way of improving care for people with complex care requirements. If integrated funds facilitate coordinated care, this could support improvements in patient experience, and health and social care outcomes, reduce avoidable hospital admissions and delayed discharges, and so reduce costs. In this article, we examine whether this potential has been realized in practice. We propose a framework based on agency theory for understanding the role that integrated funding can play in promoting coordinated care, and review the evidence to see whether the expected effects are realized in practice. We searched eight electronic databases and relevant websites, and checked reference lists of reviews and empirical studies. We extracted data on the types of funding integration used by schemes, their benefits and costs (including unintended effects), and the barriers to implementation. We interpreted our findings with reference to our framework. The review included 38 schemes from eight countries. Most of the randomized evidence came from Australia, with nonrandomized comparative evidence available from Australia, Canada, England, Sweden and the US. None of the comparative evidence isolated the effect of integrated funding; instead, studies assessed the effects of 'integrated financing plus integrated care' (i.e. 'integration') relative to usual care. Most schemes (24/38) assessed health outcomes, of which over half found no significant impact on health. The impact of integration on secondary care costs or use was assessed in 34 schemes. In 11 schemes, integration had no significant effect on secondary care costs or utilisation. Only three schemes reported significantly lower secondary care use compared with usual care. In the remaining 19 schemes, the evidence was mixed or unclear. 
Some schemes achieved short-term reductions in delayed discharges, but there was anecdotal evidence of unintended consequences such as premature hospital discharge and heightened risk of readmission. No scheme achieved a sustained reduction in hospital use. The primary barrier was the difficulty of implementing financial integration, despite the existence of statutory and regulatory support. Even where funds were successfully pooled, budget holders' control over access to services remained limited. Barriers in the form of differences in performance frameworks, priorities and governance were prominent amongst the UK schemes, whereas difficulties in linking different information systems were more widespread. Despite these barriers, many schemes - including those that failed to improve health or reduce costs - reported that access to care had improved. Some of these schemes revealed substantial levels of unmet need and so total costs increased. It is often assumed in policy that integrating funding will promote integrated care, and lead to better health outcomes and lower costs. Both our agency theory-based framework and the evidence indicate that the link is likely to be weak. Integrated care may uncover unmet need. Resolving this can benefit both individuals and society, but total care costs are likely to rise. Provided that integration delivers improvements in quality of life, even with additional costs, it may, nonetheless, offer value for money. © The Author(s) 2015.

  8. Integrating funds for health and social care: an evidence review

    PubMed Central

    Goddard, Maria; Weatherly, Helen; Chalkley, Martin

    2015-01-01

    Objectives Integrated funds for health and social care are one possible way of improving care for people with complex care requirements. If integrated funds facilitate coordinated care, this could support improvements in patient experience, and health and social care outcomes, reduce avoidable hospital admissions and delayed discharges, and so reduce costs. In this article, we examine whether this potential has been realized in practice. Methods We propose a framework based on agency theory for understanding the role that integrated funding can play in promoting coordinated care, and review the evidence to see whether the expected effects are realized in practice. We searched eight electronic databases and relevant websites, and checked reference lists of reviews and empirical studies. We extracted data on the types of funding integration used by schemes, their benefits and costs (including unintended effects), and the barriers to implementation. We interpreted our findings with reference to our framework. Results The review included 38 schemes from eight countries. Most of the randomized evidence came from Australia, with nonrandomized comparative evidence available from Australia, Canada, England, Sweden and the US. None of the comparative evidence isolated the effect of integrated funding; instead, studies assessed the effects of ‘integrated financing plus integrated care’ (i.e. ‘integration’) relative to usual care. Most schemes (24/38) assessed health outcomes, of which over half found no significant impact on health. The impact of integration on secondary care costs or use was assessed in 34 schemes. In 11 schemes, integration had no significant effect on secondary care costs or utilisation. Only three schemes reported significantly lower secondary care use compared with usual care. In the remaining 19 schemes, the evidence was mixed or unclear. 
Some schemes achieved short-term reductions in delayed discharges, but there was anecdotal evidence of unintended consequences such as premature hospital discharge and heightened risk of readmission. No scheme achieved a sustained reduction in hospital use. The primary barrier was the difficulty of implementing financial integration, despite the existence of statutory and regulatory support. Even where funds were successfully pooled, budget holders’ control over access to services remained limited. Barriers in the form of differences in performance frameworks, priorities and governance were prominent amongst the UK schemes, whereas difficulties in linking different information systems were more widespread. Despite these barriers, many schemes – including those that failed to improve health or reduce costs – reported that access to care had improved. Some of these schemes revealed substantial levels of unmet need and so total costs increased. Conclusions It is often assumed in policy that integrating funding will promote integrated care, and lead to better health outcomes and lower costs. Both our agency theory-based framework and the evidence indicate that the link is likely to be weak. Integrated care may uncover unmet need. Resolving this can benefit both individuals and society, but total care costs are likely to rise. Provided that integration delivers improvements in quality of life, even with additional costs, it may, nonetheless, offer value for money. PMID:25595287

  9. Linear approximations of global behaviors in nonlinear systems with moderate or strong noise

    NASA Astrophysics Data System (ADS)

    Liang, Junhao; Din, Anwarud; Zhou, Tianshou

    2018-03-01

    While many physical or chemical systems can be modeled by nonlinear Langevin equations (LEs), dynamical analysis of these systems is challenging in the cases of moderate and strong noise. Here we develop a linear approximation scheme, which can transform an often intractable LE into a linear set of binomial moment equations (BMEs). This scheme provides a feasible way to capture nonlinear behaviors in the sense of probability distribution and is effective even when the noise is moderate or strong. Based on BMEs, we further develop a noise reduction technique, which can effectively handle tough cases where traditional small-noise theories are inapplicable. The overall method not only provides an approximation-based paradigm for analysis of the local and global behaviors of nonlinear noisy systems but also has a wide range of applications.
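As a point of reference for the kind of system targeted above, a nonlinear LE can always be simulated directly by the Euler-Maruyama method; the BME approach replaces such brute-force sampling with a linear set of moment equations. The sketch below is a generic double-well example, not the authors' scheme:

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + sigma dW by the Euler-Maruyama method."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] + drift(x[i]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
# Bistable (double-well) drift x - x^3: a standard nonlinear test case
# where small-noise theories fail once sigma is no longer small.
path = euler_maruyama(lambda x: x - x**3, sigma=0.5, x0=1.0,
                      dt=1e-3, n_steps=100_000, rng=rng)
```

Moment-based schemes such as the BMEs above aim to reproduce the distribution of `path` without generating trajectories at all.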

  10. A multichannel EEG acquisition scheme based on single ended amplifiers and digital DRL.

    PubMed

    Haberman, Marcelo Alejandro; Spinelli, Enrique Mario

    2012-12-01

    Single-ended (SE) amplifiers allow biopotential front-ends to be implemented with a reduced number of parts, making them well suited for preamplified electrodes or compact EEG headboxes. On the other hand, given that each channel has an independent gain, mismatches between these gains result in poor common-mode rejection ratios (CMRRs) (about 30 dB with 1% tolerance components). This work proposes a scheme for multichannel EEG acquisition systems based on SE amplifiers and a novel digital driven right leg (DDRL) circuit, which overcomes the poor CMRR of the front-end stage by providing high common-mode reduction at the power line frequency (up to 80 dB). A functional prototype was built and tested, showing the feasibility of the proposed technique. It provided EEG records with negligible power-line interference, even in very aggressive EMI environments.
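The ~30 dB figure for 1% components follows from a back-of-envelope calculation: with two single-ended channels of gains g1 and g2, the differential gain is about (g1 + g2)/2 while the residual common-mode gain is g1 - g2. A small numeric sketch (illustrative, not from the paper):

```python
import math

def cmrr_db(g1, g2):
    """CMRR of a two-channel single-ended subtraction:
    differential gain ~ (g1 + g2)/2, common-mode gain = g1 - g2."""
    a_d = (g1 + g2) / 2.0
    a_cm = abs(g1 - g2)
    return 20.0 * math.log10(a_d / a_cm)

# Worst-case 1% resistor tolerance: one gain +1%, the other -1%.
print(round(cmrr_db(1.01, 0.99), 1))  # ~34 dB, same order as the ~30 dB quoted
```

Tightening component tolerance by 10x buys roughly 20 dB, which is why the paper instead cancels the common mode actively with the DDRL circuit.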

  11. Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme

    PubMed Central

    Woringer, Maxime; Darzacq, Xavier; Zimmer, Christophe

    2017-01-01

    Three-dimensional fluorescence microscopy based on Nyquist sampling of focal planes faces harsh trade-offs between acquisition time, light exposure, and signal-to-noise. We propose a 3D compressed sensing approach that uses temporal modulation of the excitation intensity during axial stage sweeping and can be adapted to fluorescence microscopes without hardware modification. We describe implementations on a lattice light sheet microscope and an epifluorescence microscope, and show that images of beads and biological samples can be reconstructed with a 5-10 fold reduction of light exposure and acquisition time. Our scheme opens a new door towards faster and less damaging 3D fluorescence microscopy. PMID:28788909
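Compressed-sensing reconstruction of this kind generally amounts to recovering a sparse signal from fewer measurements than Nyquist sampling requires, via l1-regularized least squares. A generic iterative soft-thresholding (ISTA) sketch, not the authors' reconstruction code:

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 40, 4                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))  # random sensing matrix
x_hat = ista(A, A @ x_true, n_iter=2000)     # recover from m << n measurements
```

The microscopy scheme plays the same game in the axial direction: modulated excitation provides the underdetermined measurements, and sparsity of the sample makes the reconstruction well posed.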

  12. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, B.; Polizzi, E.

    2013-05-01

    The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with respect to the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
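For contrast with the FEAST-based approach, the traditional SCF mixing loop referred to above can be sketched on a toy density-dependent Hamiltonian (the model and all parameters below are purely illustrative):

```python
import numpy as np

def scf(h0, v_strength=0.3, n_occ=2, mix=0.5, tol=1e-10, max_iter=200):
    """Toy self-consistent field loop: H(rho) = h0 + v_strength*diag(rho),
    with the density rho built from the n_occ lowest eigenvectors and
    updated by linear mixing until self-consistency."""
    n = h0.shape[0]
    rho = np.full(n, n_occ / n)                      # uniform initial guess
    for it in range(max_iter):
        h = h0 + v_strength * np.diag(rho)           # density-dependent H
        w, v = np.linalg.eigh(h)
        rho_new = (v[:, :n_occ] ** 2).sum(axis=1)    # occupied density
        if np.linalg.norm(rho_new - rho) < tol:
            return rho_new, w[:n_occ], it
        rho = mix * rho_new + (1 - mix) * rho        # linear mixing
    raise RuntimeError("SCF did not converge")

# Hypothetical test Hamiltonian: a 1D tight-binding chain.
n = 8
h0 = -(np.eye(n, k=1) + np.eye(n, k=-1))
rho, e_occ, n_iter = scf(h0)
```

The nonlinear eigensolver of the paper replaces this outer fixed-point iteration, folding the density dependence into the eigenvalue solve itself.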

  13. Optimal feedback control of turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz

    1993-01-01

    Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional' constructed to represent a balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional at some time shortly in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.
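The structure of such a problem, a quadratic cost balancing state error against control effort minimized over a finite horizon, can be illustrated on a toy linear system via the standard backward Riccati recursion (a generic sketch, not the paper's adjoint-based turbulence controller):

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, T):
    """Backward Riccati recursion for the finite-horizon quadratic cost
    J = sum_t (x'Qx + u'Ru), giving feedback gains u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]          # gains ordered forward in time

# Toy near-double-integrator with dt = 0.1 (hypothetical plant).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Ks = finite_horizon_lqr(A, B, np.eye(2), np.array([[1.0]]), T=50)

# Closed-loop rollout: the cost-minimizing feedback drives the state to 0.
x = np.array([1.0, 0.0])
for K in Ks:
    x = A @ x + (B @ (-K @ x)).ravel()
```

In the paper's setting the "state" is the near-wall flow field and the "control" the wall-normal blowing/suction, but the cost-balancing logic is the same.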

  14. On the representation matrices of the spin permutation group. [for atomic and molecular electronic structures

    NASA Technical Reports Server (NTRS)

    Wilson, S.

    1977-01-01

    A method is presented for the determination of the representation matrices of the spin permutation group (symmetric group), a detailed knowledge of these matrices being required in the study of the electronic structure of atoms and molecules. The method is characterized by the use of two different coupling schemes. Unlike the Yamanouchi spin algebraic scheme, the method is not recursive. The matrices for the fundamental transpositions can be written down directly in one of the two bases. The method results in a computationally significant reduction in the number of matrix elements that have to be stored when compared with, say, the standard Young tableaux group theoretical approach.
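The "fundamental transpositions" mentioned above are the adjacent transpositions σ_k = (k, k+1), which generate the whole symmetric group. As a point of comparison (in Young's orthogonal basis, which is not necessarily either of the two coupling-scheme bases used here), their matrix elements can indeed be written down directly:

```latex
\sigma_k \, e_T \;=\; \frac{1}{d}\, e_T \;+\; \sqrt{1 - \frac{1}{d^{2}}}\; e_{\sigma_k T},
```

where e_T is the basis vector labelled by the standard Young tableau T, σ_k T is T with k and k+1 interchanged, and d is the signed axial distance from k to k+1 in T; when σ_k T is not standard, d = ±1 and the second term vanishes.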

  15. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash-based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solving challenging data-intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear-leveling extends the lifetime of PCM by a factor of 21.6.
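The contrast between LRU eviction and hash-based write caching can be sketched generically: LRU evicts by recency, whereas a hash-bucketed cache groups dirty blocks by destination region so that a whole bucket is flushed as one near-sequential write. The sketch below illustrates the idea only; it is not the paper's HALO implementation:

```python
from collections import OrderedDict, defaultdict

class LRUWriteCache:
    """Classic LRU: evicts the least-recently-written block."""
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
    def write(self, lba, value):
        self.data.pop(lba, None)
        self.data[lba] = value                    # most-recent position
        if len(self.data) > self.capacity:
            return self.data.popitem(last=False)  # evict one cold block
        return None

class HashBucketWriteCache:
    """Hash-bucketed write cache (HALO-like idea): blocks are grouped by
    destination region, so eviction flushes a whole region at once,
    turning scattered small writes into one sequential burst."""
    def __init__(self, capacity, region_size=64):
        self.capacity, self.region_size = capacity, region_size
        self.size, self.buckets = 0, defaultdict(dict)
    def write(self, lba, value):
        bucket = self.buckets[lba // self.region_size]
        self.size += lba not in bucket
        bucket[lba] = value
        if self.size > self.capacity:
            # Flush the fullest bucket: one near-sequential disk write.
            region = max(self.buckets, key=lambda r: len(self.buckets[r]))
            flushed = self.buckets.pop(region)
            self.size -= len(flushed)
            return sorted(flushed)   # LBAs written in sequential order
        return None
```

Flushing `[0, 1, 2]` as one sweep is far cheaper on an HDD than the three scattered writes an LRU policy might emit at different times.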

  16. Elements of EAF automation processes

    NASA Astrophysics Data System (ADS)

    Ioana, A.; Constantin, N.; Dragna, E. C.

    2017-01-01

    Our article presents elements of Electric Arc Furnace (EAF) automation. We present and analyze in detail two automation schemes: the electrical EAF automation system and the thermal EAF automation system. Applying these automation schemes results in a significant reduction of the specific consumption of electrical energy of the Electric Arc Furnace, increased productivity of the Electric Arc Furnace, improved quality of the produced steel, and increased durability of the furnace's structural elements.

  17. Predominant-period site classification for response spectra prediction equations in Italy

    USGS Publications Warehouse

    Di Alessandro, Carola; Bonilla, Luis Fabian; Boore, David M.; Rovelli, Antonio; Scotti, Oona

    2012-01-01

    We propose a site‐classification scheme based on the predominant period of the site, as determined from the average horizontal‐to‐vertical (H/V) spectral ratios of ground motion. Our scheme extends Zhao et al. (2006) classifications by adding two classes, the most important of which is defined by flat H/V ratios with amplitudes less than 2. The proposed classification is investigated by using 5%‐damped response spectra from Italian earthquake records. We select a dataset of 602 three‐component analog and digital recordings from 120 earthquakes recorded at 214 seismic stations within a hypocentral distance of 200 km. Selected events are in the moment‐magnitude range 4.0≤Mw≤6.8 and focal depths from a few kilometers to 46 km. We computed H/V ratios for these data and used them to classify each site into one of six classes. We then investigate the impact of this classification scheme on empirical ground‐motion prediction equations (GMPEs) by comparing its performance with that of the conventional rock/soil classification. Although the adopted approach results in only a small reduction of the overall standard deviation, the use of H/V spectral ratios in site classification does capture the signature of sites with flat frequency‐response, as well as deep and shallow‐soil profiles, characterized by long‐ and short‐period resonance, respectively; in addition, the classification scheme is relatively quick and inexpensive, which is an advantage over schemes based on measurements of shear‐wave velocity.
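The core computation, an average horizontal-to-vertical spectral ratio followed by a predominant-period lookup, can be sketched as follows (the class boundaries here are illustrative, not those of Zhao et al., 2006):

```python
import numpy as np

def hv_ratio(h1, h2, v, dt):
    """Average horizontal-to-vertical Fourier spectral ratio."""
    freqs = np.fft.rfftfreq(len(v), dt)
    hspec = np.sqrt((np.abs(np.fft.rfft(h1))**2 + np.abs(np.fft.rfft(h2))**2) / 2)
    vspec = np.abs(np.fft.rfft(v)) + 1e-12   # guard against division by zero
    return freqs, hspec / vspec

def classify(freqs, ratio, flat_amp=2.0):
    """Toy predominant-period classifier: 'flat' if the whole H/V ratio
    stays below flat_amp (the added class in the scheme above); otherwise
    classify by the peak period 1/f_peak. The 0.4 s boundary is
    illustrative, not the published class table."""
    if ratio.max() < flat_amp:
        return "flat"
    t_peak = 1.0 / freqs[np.argmax(ratio)]
    return "short-period" if t_peak < 0.4 else "long-period"

freqs = np.linspace(0.1, 50.0, 500)
print(classify(freqs, np.full(500, 1.5)))   # flat H/V, amplitude < 2 -> "flat"
```

A real implementation would smooth the ratio and average over several records per station before classifying.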

  18. Comparison of Respiratory Disease Prevalence among Voluntary Monitoring Systems for Pig Health and Welfare in the UK.

    PubMed

    Eze, J I; Correia-Gomes, C; Borobia-Belsué, J; Tucker, A W; Sparrow, D; Strachan, D W; Gunn, G J

    2015-01-01

    Surveillance of animal diseases provides information essential for the protection of animal health and ultimately public health. The voluntary pig health schemes, implemented in the United Kingdom, are integrated systems which capture information on different macroscopic disease conditions detected in slaughtered pigs. Many of these conditions have been associated with a reduction in performance traits and consequent increases in production costs. The schemes are the Wholesome Pigs Scotland in Scotland, the BPEX Pig Health Scheme in England and Wales and the Pig Regen Ltd. health and welfare checks done in Northern Ireland. This report set out to compare the prevalence of four respiratory conditions (enzootic pneumonia-like lesions, pleurisy, pleuropneumonia lesions and abscesses in the lung) assessed by these three Pig Health Schemes. The seasonal variations and year trends associated with the conditions in each scheme are presented. The paper also highlights the differences in prevalence for each condition across these schemes and areas where further research is needed. A general increase in the prevalence of enzootic pneumonia-like lesions was observed in Scotland, England and Wales since 2009, while a general decrease was observed in Northern Ireland over the years of the scheme. Pleurisy prevalence has increased since 2010 in all three schemes, whilst pleuropneumonia has been decreasing. Prevalence of abscesses in the lung has decreased in England, Wales and Northern Ireland but has increased in Scotland. This analysis highlights the value of surveillance schemes based on abattoir pathology monitoring of four respiratory lesions. The outputs at scheme level have significant value as indicators of endemic and emerging disease, and for producers and herd veterinarians in planning and evaluating herd health control programs when comparing individual farm results with national averages.

  19. High-bandwidth generation of duobinary and alternate-mark-inversion modulation formats using SOA-based signal processing.

    PubMed

    Dailey, James M; Power, Mark J; Webb, Roderick P; Manning, Robert J

    2011-12-19

    We report on the novel all-optical generation of duobinary (DB) and alternate-mark-inversion (AMI) modulation formats at 42.6 Gb/s from an input on-off keyed signal. The modulation converter consists of two semiconductor optical amplifier (SOA)-based Mach-Zehnder interferometer gates. A detailed SOA model numerically confirms the operational principles and experimental data shows successful AMI and DB conversion at 42.6 Gb/s. We also predict that the operational bandwidth can be extended beyond 40 Gb/s by utilizing a new pattern-effect suppression scheme, and demonstrate dramatic reductions in patterning up to 160 Gb/s. We show an increasing trade-off between pattern-effect reduction and mean output power with increasing bitrate.

  20. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  1. Scheduled Relaxation Jacobi method: Improvements and applications

    NASA Astrophysics Data System (ADS)

    Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.

    2016-09-01

    Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever-growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundreds with respect to the Jacobi method for typical resolutions and, in some high resolution cases, close to 1000. Most of the success in finding SRJ optimal schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
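The flavor of the SRJ idea, Jacobi sweeps with a scheduled cycle of over- and under-relaxation factors whose combined effect damps all error modes, can be sketched on a 1D Poisson problem. The schedule below is hand-picked for illustration; it is not an optimal SRJ parameter set:

```python
import numpy as np

def relaxed_jacobi_sweep(u, f, h, omega):
    """One weighted-Jacobi sweep for the 1D Poisson problem -u'' = f
    with homogeneous Dirichlet boundaries on a uniform grid."""
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[2:] + u[:-2] + h * h * f[1:-1])
    return u + omega * (u_new - u)

def srj_like_solve(f, h, schedule, n_cycles):
    """Repeat a scheduled cycle of over/under-relaxed Jacobi sweeps.
    An over-relaxed sweep (omega > 1) alone is unstable for the highest
    modes, but the under-relaxed sweeps in the cycle damp them back."""
    u = np.zeros_like(f)
    for _ in range(n_cycles):
        for omega in schedule:
            u = relaxed_jacobi_sweep(u, f, h, omega)
    return u

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = srj_like_solve(f, h, schedule=[1.8, 1.0, 0.6], n_cycles=2000)
```

Optimal SRJ schedules are chosen so that the product of the per-sweep amplification factors is uniformly small across the whole spectrum, which is exactly the minimization problem the paper's parameter computation solves.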

  2. Characteristic-based algorithms for flows in thermo-chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David

    1990-01-01

    A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
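Characteristic-based splitting is easiest to see in the scalar case, where the flux is split by the sign of the characteristic speed, f± = ½(a ± |a|)u, so that each part is differenced in its upwind direction; Steger-Warming generalizes this by splitting the eigenvalues of the flux Jacobian. A minimal sketch for linear advection:

```python
import numpy as np

def split_flux_update(u, a, dt, dx):
    """First-order upwind step for u_t + a u_x = 0 via characteristic
    flux splitting: the a+ part uses left (upwind) data, the a- part
    uses right data (the scalar analogue of Steger-Warming splitting)."""
    ap, am = 0.5 * (a + abs(a)), 0.5 * (a - abs(a))
    un = u.copy()
    un[1:-1] -= dt / dx * (ap * (u[1:-1] - u[:-2]) + am * (u[2:] - u[1:-1]))
    return un

x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200 * (x - 0.3) ** 2)       # Gaussian pulse
dt, dx, a = 0.002, x[1] - x[0], 1.0     # CFL = a*dt/dx = 0.4
for _ in range(100):
    u = split_flux_update(u, a, dt, dx)
# After 100 steps the pulse has advected a*dt*100 = 0.2 to the right.
```

For the Euler or Navier-Stokes equations the same decomposition is applied per eigenvalue of the flux Jacobian, which is what distinguishes the Steger-Warming, Van Leer and Roe variants compared in the paper.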

  3. SU-C-206-07: A Practical Sparse View Ultra-Low Dose CT Acquisition Scheme for PET Attenuation Correction in the Extended Scan Field-Of-View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, J; Fan, J; Gopinatha Pillai, A

    Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation as higher dose for PET imaging in the extended scan field-of-view. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low dose techniques often employ low tube voltage/current accompanied with a smooth filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low dose CT technique for PET attenuation correction based on sparse-view acquisition. That is, instead of an acquisition of the full number of views, only a fraction of views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. A number of continuous views per rotation: 984 (full), 246, 123, 82 and 62 have been tested, corresponding to a CT dose reduction of none, 4x, 8x, 12x and 16x. We also simulated sparse-view acquisition by skipping views from the fully-acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the full extended scan field-of-view possesses similar image quality to the reconstruction on the acquired full-view data. The results showed a further potential for dose reduction compared to the full acquisition, without sacrificing any significant attenuation support to the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches, which can even further reduce the CT scan dose during a PET/CT exam.

  4. A LATIN-based model reduction approach for the simulation of cycling damage

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre

    2017-11-01

    The objective of this article is to introduce a new method including model order reduction for the life prediction of structures subjected to cycling damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as a solution framework. This approach allows to introduce a PGD model reduction technique which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty herein for the use of the LATIN method comes from the state laws which can not be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.

  5. A comparative study of European insurance schemes for extreme weather risks and incentives for risk reduction

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen; Hudson, Paul; de Ruig, Lars; Kuik, Onno; Botzen, Wouter

    2017-04-01

    This paper provides an analysis of the insurance schemes that cover extreme weather events in twelve different EU countries and the risk reduction incentives offered by these schemes. Economic impacts of extreme weather events in many regions in Europe and elsewhere are on the rise due to climate change and increasing exposure as driven by urban development. In an attempt to manage impacts from extreme weather events, natural disaster insurance schemes can provide incentives for taking measures that limit weather-related risks. Insurance companies can influence public risk management policies and risk-reducing behaviour of policyholders by "rewarding behaviour that reduces risks and potential damages" (Botzen and Van den Bergh, 2008, p. 417). Examples of insurance market systems that directly or indirectly aim to incentivize risk reduction with varying degrees of success are: the U.S. National Flood Insurance Programme; the French Catastrophes Naturelles system; and the U.K. Flood Re program which requires certain levels of protection standards for properties to be insurable. In our analysis, we distinguish between four different disaster types (i.e. coastal and fluvial floods, droughts and storms) and three different sectors (i.e. residential, commercial and agriculture). The selected case studies also provide a wide coverage of different insurance market structures, including public, private and public-private insurance provision, and different methods of coping with extreme loss events, such as re-insurance, governmental aid and catastrophe bonds. The analysis of existing mechanisms for risk reduction incentives provides recommendations about incentivizing adaptive behaviour, in order to assist policy makers and other stakeholders in designing more effective insurance schemes for extreme weather risks.

  6. Lessons from community-based payment for ecosystem service schemes: from forests to rangelands.

    PubMed

    Dougill, Andrew J; Stringer, Lindsay C; Leventon, Julia; Riddell, Mike; Rueff, Henri; Spracklen, Dominick V; Butt, Edward

    2012-11-19

    Climate finance investments and international policy are driving new community-based projects incorporating payments for ecosystem services (PES) to simultaneously store carbon and generate livelihood benefits. Most community-based PES (CB-PES) research focuses on forest areas. Rangelands, which store globally significant quantities of carbon and support many of the world's poor, have seen little CB-PES research attention, despite benefitting from several decades of community-based natural resource management (CBNRM) projects. Lessons from CBNRM suggest institutional considerations are vital in underpinning the design and implementation of successful community projects. This study uses documentary analysis to explore the institutional characteristics of three African community-based forest projects that seek to deliver carbon-storage and poverty-reduction benefits. Strong existing local institutions, clear land tenure, community control over land management decision-making and up-front, flexible payment schemes are found to be vital. Additionally, we undertake a global review of rangeland CBNRM literature and identify that alongside the lessons learned from forest projects, rangeland CB-PES project design requires specific consideration of project boundaries, benefit distribution, capacity building for community monitoring of carbon storage together with awareness-raising using decision-support tools to display the benefits of carbon-friendly land management. We highlight that institutional analyses must be undertaken alongside improved scientific studies of the carbon cycle to enable links to payment schemes, and for them to contribute to poverty alleviation in rangelands.

  7. Lessons from community-based payment for ecosystem service schemes: from forests to rangelands

    PubMed Central

    Dougill, Andrew J.; Stringer, Lindsay C.; Leventon, Julia; Riddell, Mike; Rueff, Henri; Spracklen, Dominick V.; Butt, Edward

    2012-01-01

    Climate finance investments and international policy are driving new community-based projects incorporating payments for ecosystem services (PES) to simultaneously store carbon and generate livelihood benefits. Most community-based PES (CB-PES) research focuses on forest areas. Rangelands, which store globally significant quantities of carbon and support many of the world's poor, have seen little CB-PES research attention, despite benefitting from several decades of community-based natural resource management (CBNRM) projects. Lessons from CBNRM suggest institutional considerations are vital in underpinning the design and implementation of successful community projects. This study uses documentary analysis to explore the institutional characteristics of three African community-based forest projects that seek to deliver carbon-storage and poverty-reduction benefits. Strong existing local institutions, clear land tenure, community control over land management decision-making and up-front, flexible payment schemes are found to be vital. Additionally, we undertake a global review of rangeland CBNRM literature and identify that alongside the lessons learned from forest projects, rangeland CB-PES project design requires specific consideration of project boundaries, benefit distribution, capacity building for community monitoring of carbon storage together with awareness-raising using decision-support tools to display the benefits of carbon-friendly land management. We highlight that institutional analyses must be undertaken alongside improved scientific studies of the carbon cycle to enable links to payment schemes, and for them to contribute to poverty alleviation in rangelands. PMID:23045714

  8. Performance Analysis of Cluster Formation in Wireless Sensor Networks.

    PubMed

    Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-12-13

    Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked: the transmission probability used during the cluster formation phase, and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
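Of the cluster-head selection schemes mentioned, k-medoids is a natural fit for WSNs because the "centers" it picks are actual sensor nodes rather than k-means-style virtual centroids. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def k_medoids(points, k, n_iter=50, seed=0):
    """Minimal k-medoids: cluster heads are actual nodes chosen to
    minimize total intra-cluster distance (a sketch, not an optimized
    PAM implementation)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    medoids = rng.choice(len(points), k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)     # assign to nearest head
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # Pick the member minimizing summed distance to the rest.
                new_medoids[j] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Hypothetical deployment: 90 nodes in three spatial groups.
rng = np.random.default_rng(3)
nodes = np.vstack([rng.normal(c, 0.5, (30, 2)) for c in ((0, 0), (5, 5), (0, 5))])
heads, labels = k_medoids(nodes, k=3)
```

In the WSN setting the distance matrix would typically be weighted by residual node energy as well, so that well-placed, well-charged nodes become heads.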

  9. Performance Analysis of Cluster Formation in Wireless Sensor Networks

    PubMed Central

    Montiel, Edgar Romo; Rivero-Angeles, Mario E.; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-01-01

    Cluster-based wireless sensor networks have been used extensively in the literature to achieve considerable reductions in energy consumption. However, two aspects of such systems have been largely overlooked: the transmission probability used during the cluster-formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on system performance. For the former, it is common to assume that sensor nodes in a cluster-based Wireless Sensor Network (WSN) use a fixed transmission probability to send the control data needed to build the clusters. However, because these networks experience highly variable conditions, a fixed transmission probability may lead to extra energy consumption. In view of this, three transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes: specifically, we consider two intelligent schemes, based on the fuzzy C-means and k-medoids algorithms, and a non-intelligent random selection scheme. We show that the intelligent schemes greatly improve system performance, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and their impact on system performance under the different transmission probability schemes. PMID:29236065

  10. Group-based microfinance for collective empowerment: a systematic review of health impacts

    PubMed Central

    Pennington, Andy; Nayak, Shilpa; Sowden, Amanda; White, Martin; Whitehead, Margaret

    2016-01-01

    Objective To assess the impact on health-related outcomes of group microfinance schemes based on collective empowerment. Methods We searched the databases Social Sciences Citation Index, Embase, MEDLINE, MEDLINE In-Process, PsycINFO, Social Policy & Practice and Conference Proceedings Citation Index for articles published between 1 January 1980 and 29 February 2016. Articles reporting on health impacts associated with group-based microfinance were included in a narrative synthesis. Findings We identified one cluster-randomized controlled trial and 22 quasi-experimental studies. All of the included interventions targeted poor women living in low- or middle-income countries. Some included a health-promotion component. The results of the higher-quality studies indicated an association between membership of a microfinance scheme and improvements in the health of women and their children. The observed improvements included reduced maternal and infant mortality, better sexual health and, in some cases, lower levels of interpersonal violence. According to the results of the few studies in which changes in empowerment were measured, membership of the relatively large and well-established microfinance schemes generally led to increased empowerment, but this did not necessarily translate into improved health outcomes. Qualitative evidence suggested that increased empowerment may have contributed to observed improvements in contraceptive use and mental well-being and to reductions in the risk of violence from an intimate partner. Conclusion Membership of the larger, well-established group-based microfinance schemes is associated with improvements in some health outcomes. Future studies need to be designed to cope better with bias and to assess negative as well as positive social and health impacts. PMID:27708475

  11. Group-based microfinance for collective empowerment: a systematic review of health impacts.

    PubMed

    Orton, Lois; Pennington, Andy; Nayak, Shilpa; Sowden, Amanda; White, Martin; Whitehead, Margaret

    2016-09-01

    To assess the impact on health-related outcomes of group microfinance schemes based on collective empowerment. We searched the databases Social Sciences Citation Index, Embase, MEDLINE, MEDLINE In-Process, PsycINFO, Social Policy & Practice and Conference Proceedings Citation Index for articles published between 1 January 1980 and 29 February 2016. Articles reporting on health impacts associated with group-based microfinance were included in a narrative synthesis. We identified one cluster-randomized controlled trial and 22 quasi-experimental studies. All of the included interventions targeted poor women living in low- or middle-income countries. Some included a health-promotion component. The results of the higher-quality studies indicated an association between membership of a microfinance scheme and improvements in the health of women and their children. The observed improvements included reduced maternal and infant mortality, better sexual health and, in some cases, lower levels of interpersonal violence. According to the results of the few studies in which changes in empowerment were measured, membership of the relatively large and well-established microfinance schemes generally led to increased empowerment, but this did not necessarily translate into improved health outcomes. Qualitative evidence suggested that increased empowerment may have contributed to observed improvements in contraceptive use and mental well-being and to reductions in the risk of violence from an intimate partner. Membership of the larger, well-established group-based microfinance schemes is associated with improvements in some health outcomes. Future studies need to be designed to cope better with bias and to assess negative as well as positive social and health impacts.

  12. Susceptibility of the Batoka Gorge hydroelectric scheme to climate change

    NASA Astrophysics Data System (ADS)

    Harrison, Gareth P.; Whittington, H.(Bert) W.

    2002-07-01

    The continuing and increased use of renewable energy sources, including hydropower, is a key strategy to limit the extent of future climate change. Paradoxically, climate change itself may alter the availability of this natural resource, adversely affecting the financial viability of both existing and potential schemes. Here, a model is described that enables the assessment of the relationship between changes in climate and the viability, technical and financial, of hydro development. The planned Batoka Gorge scheme on the Zambezi River is used as a case study to validate the model and to predict the impact of climate change on river flows, electricity production and scheme financial performance. The model was found to perform well, given the inherent difficulties in the task, although there is concern regarding the ability of the hydrological model to reproduce the historic flow conditions of the upper Zambezi Basin. Simulations with climate change scenarios illustrate the sensitivity of the Batoka Gorge scheme to changes in climate. They suggest significant reductions in river flows, declining power production, reductions in electricity sales revenue and consequently an adverse impact on a range of investment measures.

  13. The effectiveness of measures to reduce the man-made greenhouse effect. The application of a Climate-policy Model

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Bach, W.

    1994-06-01

    In this paper, we briefly describe the characteristics and performance of our 1-D Muenster Climate Model. The model system consists of coupled models, including gas cycle models, an energy balance model and a sea level rise model. Chemical feedback mechanisms among greenhouse gases are not included. This model, a scientifically based parameterized simulation model, is used here primarily to help assess the effectiveness of various plausible policy options in mitigating additional man-made greenhouse warming and the resulting sea level rise. For setting priorities, it is important to assess the effectiveness of the various measures by which the greenhouse effect can be reduced. To this end, we take the Business-as-Usual scenario as a reference case (Leggett et al., 1992) and study the mitigating effects of the following four packages of measures: the Copenhagen Agreements on CFC, HCFC, and halon reduction (GECR, 1992); the Tropical Forest Preservation Plan of the Climate Enquete-Commission of the German Parliament on CO2 reduction (ECGP, 1990); a detailed reduction scheme for energy-related CO2 (ECGP, 1990); and a preliminary scheme for CH4, CO, and N2O reduction (Bach and Jain, 1992, 1993). The required reduction depends, among other things, on the desired level of climate and ecosystem protection. This is defined by the Enquete-Commission and others as a mean global rate of surface temperature change of ca. 0.1 °C per decade (assumed to be critical to many ecosystems) and a mean global warming ceiling of ca. 2 °C in 2100 relative to 1860. Our results show that the Copenhagen Agreements, the Tropical Forest Preservation Plan, the energy-related CO2 reduction scheme, and the CH4 and N2O reduction schemes could mitigate the anthropogenic greenhouse warming by ca. 12%, 6%, 35%, and 9%, respectively. Taken together, all four packages of measures could reduce the man-made greenhouse effect by more than 60% by 2100; i.e., over the climate sensitivity range 2.5 °C (1.5 to 4.5 °C) for 2 × CO2, the warming could be reduced from 3.5 °C (2.4 to 5.0 °C) without specific measures to 1.3 °C (0.9 to 2.0 °C) with the above packages of measures; likewise, the mean global sea level rise could be reduced from 65 cm (46 to 88 cm) without specific measures to 32 cm (22 to 47 cm) with the above measures. Finally, the model results also emphasize the importance of trace gases other than CO2 in mitigating additional man-made greenhouse warming. According to our preliminary estimates, CH4 could make a sizable short-term contribution to reducing the greenhouse effect (because of its relatively short lifetime of 10 yr), as could N2O in the medium and long term (with its relatively long lifetime of 150 yr).

  14. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, established via a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  15. Reduction of catastrophic health care expenditures by a community-based health insurance scheme in Gujarat, India: current experiences and challenges.

    PubMed Central

    Ranson, Michael Kent

    2002-01-01

    OBJECTIVE: To assess the Self Employed Women's Association's Medical Insurance Fund in Gujarat in terms of insurance coverage according to income groups, protection of claimants from costs of hospitalization, time between discharge and reimbursement, and frequency of use. METHODS: One thousand nine hundred and thirty claims submitted over six years were analysed. FINDINGS: Two hundred and fifteen (11%) of 1927 claims were rejected. The mean household income of claimants was significantly lower than that of the general population. The percentage of households below the poverty line was similar for claimants and the general population. One thousand seven hundred and twelve (1712) claims were reimbursed: 805 (47%) fully and 907 (53%) at a mean reimbursement rate of 55.6%. Reimbursement more than halved the percentage of catastrophic hospitalizations (>10% of annual household income) and of hospitalizations resulting in impoverishment. The average time between discharge and reimbursement was four months. The frequency of submission of claims was low (18.0/1000 members per year: 22-37% of the estimated frequency of hospitalization). CONCLUSIONS: The findings have implications for community-based health insurance schemes in India and elsewhere. Such schemes can protect poor households against the uncertain risk of medical expenses. They can be implemented in areas where institutional capacity is too weak to organize nationwide risk-pooling. Such schemes can cover poor people, including people and households below the poverty line. A trade-off exists between maintaining the scheme's financial viability and protecting members against catastrophic expenditures. To facilitate reimbursement, administration, particularly the processing of claims, should happen near claimants. Fine-tuning the design of a scheme is an ongoing process; a system of monitoring and evaluation is vital. PMID:12219151
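
    The study's key threshold is that a hospitalization is "catastrophic" when it costs more than 10% of annual household income, and it reports that reimbursement more than halved the share of such cases. A minimal sketch of that calculation, using invented illustrative figures rather than the study's data:

    ```python
    def catastrophic_rates(costs, reimbursed, incomes, threshold=0.10):
        """Share of hospitalizations whose out-of-pocket cost exceeds
        `threshold` (here 10%) of annual household income, computed
        before and after insurance reimbursement."""
        before = sum(1 for c, i in zip(costs, incomes) if c > threshold * i)
        after = sum(1 for c, r, i in zip(costs, reimbursed, incomes)
                    if c - r > threshold * i)
        n = len(incomes)
        return before / n, after / n

    # Illustrative claims (not from the study): cost, amount reimbursed,
    # and annual household income for four hospitalizations.
    costs = [1200, 300, 5000, 800]
    reimbursed = [900, 150, 2500, 800]
    incomes = [10000, 10000, 20000, 12000]
    rate_before, rate_after = catastrophic_rates(costs, reimbursed, incomes)
    # → 0.5 before reimbursement, 0.25 after: the rate is halved
    ```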

  16. Interfacial coupling induced direct Z scheme water splitting in metal-free photocatalyst: C3N/g-C3N4 heterojunctions.

    PubMed

    Wang, Jiajun; Li, Xiaoting; You, Ya; Yang, Xintong; Wang, Ying; Li, Qunxiang

    2018-06-21

    Mimicking natural photosynthesis in green plants, artificial Z-scheme photocatalysis enables more efficient utilization of solar energy for photocatalytic water splitting. Most currently designed g-C3N4-based Z-scheme heterojunctions rely on metal-containing semiconductor photocatalysts, so exploiting metal-free photocatalysts for Z-scheme water splitting is of great interest. Herein, we propose two metal-free C3N/g-C3N4 heterojunctions, with a C3N monolayer covering a g-C3N4 sheet (monolayer or bilayer), and systematically explore their electronic structures, charge distributions and photocatalytic properties by performing extensive hybrid density functional calculations. We clearly reveal that the relatively strong built-in electric fields around their respective interface regions, caused by charge transfer from the C3N monolayer to the g-C3N4 monolayer or bilayer, result in band bending, which renders the transfer of photogenerated carriers in these two heterojunctions to follow the Z-scheme rather than the type-II pathway. Moreover, the photogenerated electrons and holes in these two C3N/g-C3N4 heterojunctions not only can be efficiently separated, but also have strong redox abilities for water oxidation and reduction. Compared with isolated g-C3N4 sheets, light absorption in the visible to near-infrared region is significantly enhanced in the proposed heterojunctions. These theoretical findings suggest that the proposed metal-free C3N/g-C3N4 heterojunctions are promising direct Z-scheme photocatalysts for solar water splitting. © 2018 IOP Publishing Ltd.

  17. Reduction of catastrophic health care expenditures by a community-based health insurance scheme in Gujarat, India: current experiences and challenges.

    PubMed

    Ranson, Michael Kent

    2002-01-01

    To assess the Self Employed Women's Association's Medical Insurance Fund in Gujarat in terms of insurance coverage according to income groups, protection of claimants from costs of hospitalization, time between discharge and reimbursement, and frequency of use. One thousand nine hundred and thirty claims submitted over six years were analysed. Two hundred and fifteen (11%) of 1927 claims were rejected. The mean household income of claimants was significantly lower than that of the general population. The percentage of households below the poverty line was similar for claimants and the general population. One thousand seven hundred and twelve (1712) claims were reimbursed: 805 (47%) fully and 907 (53%) at a mean reimbursement rate of 55.6%. Reimbursement more than halved the percentage of catastrophic hospitalizations (>10% of annual household income) and of hospitalizations resulting in impoverishment. The average time between discharge and reimbursement was four months. The frequency of submission of claims was low (18.0/1000 members per year: 22-37% of the estimated frequency of hospitalization). The findings have implications for community-based health insurance schemes in India and elsewhere. Such schemes can protect poor households against the uncertain risk of medical expenses. They can be implemented in areas where institutional capacity is too weak to organize nationwide risk-pooling. Such schemes can cover poor people, including people and households below the poverty line. A trade-off exists between maintaining the scheme's financial viability and protecting members against catastrophic expenditures. To facilitate reimbursement, administration, particularly the processing of claims, should happen near claimants. Fine-tuning the design of a scheme is an ongoing process; a system of monitoring and evaluation is vital.

  18. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency stems mainly from the newly introduced prediction schemes, including variable block modes. However, these schemes require considerable computation to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. The algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and the results show that the proposed algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, at a cost of only 0.097 dB in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.

  19. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030

    2011-08-20

    Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
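
    The core idea, sampling from a cheap deterministic "adjoint-like" importance function and reweighting, can be shown on a one-dimensional toy problem. This is an illustrative analogue only, not the paper's atmospheric solver: the integrand and the exponential importance density standing in for the adjoint approximation are invented.

    ```python
    import math
    import random

    def importance_mc(f, n, rng):
        """Estimate the integral of f over [0, 1] by importance sampling.
        The importance density g(x) ∝ exp(-10 x) plays the role of a cheap
        deterministic approximation of where f contributes most."""
        Z = (1 - math.exp(-10)) / 10              # normalisation: ∫ exp(-10x) dx
        est = 0.0
        for _ in range(n):
            u = rng.random()
            # inverse-CDF sample from g
            x = -math.log(1 - u * (1 - math.exp(-10))) / 10
            g = math.exp(-10 * x) / Z
            est += f(x) / g                        # reweight by 1/g
        return est / n

    # Sharply peaked integrand: plain uniform sampling would waste most
    # samples where f is negligible; g concentrates them near x = 0.
    f = lambda x: math.exp(-10 * x) * (1 + 0.1 * math.sin(20 * x))
    est = importance_mc(f, 2000, random.Random(42))  # ≈ 0.104
    ```

    Because the weights f(x)/g(x) are nearly constant here, the estimator's variance is far below that of plain Monte Carlo with the same sample count, which is the same mechanism the hybrid scheme exploits with its deterministic adjoint solution.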

  20. [Tongue base reduction with radiofrequency energy in sleep apnea].

    PubMed

    Stuck, B A; Maurer, J T; Hörmann, K

    2001-07-01

    Tongue base reduction with temperature-controlled radiofrequency is a minimally invasive technique for the treatment of obstructive sleep apnea syndrome. Repeated application leads to progressive shrinking of the tissue. In this study, we summarize the experience gained from 100 tongue base reductions and compare it with the recently published pilot study. An intensified treatment scheme was used, with higher amounts of energy applied per treatment session. Visual analogue scales were used to assess postoperative pain and functional parameters. Regular follow-up visits were scheduled to evaluate postoperative complications. Postoperative pain was mostly mild or moderate. Perioperative complications were not observed. The overall rate of postoperative complications was 8%, with 2% mild and 5% moderate complications. One severe complication, a tongue base abscess, was observed. Using peri- and postoperative antibiotic prophylaxis reduced the rate of complications. Functional parameters such as taste and swallowing were not affected. Our results underline the safety of the procedure and demonstrate its minimal peri- and postoperative morbidity. The increased amount of energy applied per session has not led to an increase in postoperative morbidity.

  1. Turbulent boundary layer under the control of different schemes

    NASA Astrophysics Data System (ADS)

    Qiao, Z. X.; Zhou, Y.; Wu, Z.

    2017-06-01

    This work explores experimentally the control of a turbulent boundary layer over a flat plate based on wall perturbation generated by piezo-ceramic actuators. Different schemes are investigated, including the feed-forward, the feedback, and the combined feed-forward and feedback strategies, with a view to suppressing the near-wall high-speed events and hence reducing skin friction drag. While the strategies may achieve a local maximum drag reduction slightly less than their counterpart of the open-loop control, the corresponding duty cycles are substantially reduced when compared with that of the open-loop control. The results suggest a good potential to cut down the input energy under these control strategies. The fluctuating velocity, spectra, Taylor microscale and mean energy dissipation are measured across the boundary layer with and without control and, based on the measurements, the flow mechanism behind the control is proposed.

  2. Turbulent boundary layer under the control of different schemes.

    PubMed

    Qiao, Z X; Zhou, Y; Wu, Z

    2017-06-01

    This work explores experimentally the control of a turbulent boundary layer over a flat plate based on wall perturbation generated by piezo-ceramic actuators. Different schemes are investigated, including the feed-forward, the feedback, and the combined feed-forward and feedback strategies, with a view to suppressing the near-wall high-speed events and hence reducing skin friction drag. While the strategies may achieve a local maximum drag reduction slightly less than their counterpart of the open-loop control, the corresponding duty cycles are substantially reduced when compared with that of the open-loop control. The results suggest a good potential to cut down the input energy under these control strategies. The fluctuating velocity, spectra, Taylor microscale and mean energy dissipation are measured across the boundary layer with and without control and, based on the measurements, the flow mechanism behind the control is proposed.

  3. Turbulent boundary layer under the control of different schemes

    PubMed Central

    Zhou, Y.; Wu, Z.

    2017-01-01

    This work explores experimentally the control of a turbulent boundary layer over a flat plate based on wall perturbation generated by piezo-ceramic actuators. Different schemes are investigated, including the feed-forward, the feedback, and the combined feed-forward and feedback strategies, with a view to suppressing the near-wall high-speed events and hence reducing skin friction drag. While the strategies may achieve a local maximum drag reduction slightly less than their counterpart of the open-loop control, the corresponding duty cycles are substantially reduced when compared with that of the open-loop control. The results suggest a good potential to cut down the input energy under these control strategies. The fluctuating velocity, spectra, Taylor microscale and mean energy dissipation are measured across the boundary layer with and without control and, based on the measurements, the flow mechanism behind the control is proposed. PMID:28690409

  4. Optimal design of wind barriers using 3D computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Fang, H.; Wu, X.; Yang, X.

    2017-12-01

    Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas. Optimal design of wind barriers is therefore critical in aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations of flow passing through wind barriers with different structural parameters. To validate the simulation results, we first compare the simulated flow fields with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row number, inter-row spacing and belt scheme). The results show that wind barriers with a porosity of 0.35 provide the longest shelter distance (i.e., the distance over which wind velocity is reduced by more than 50%) and are thus recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both the wind-velocity reduction effect and economic expense into account. The calculated cost function shows that a 3-row-belt scheme with an inter-row spacing of 6h (where h is the height of the wind barriers) and an inter-belt spacing of 12h is the most effective.
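
    The paper's cost function trades shelter benefit against expense and selects the design with the best balance. A hypothetical sketch of such a trade-off follows; the design names, shelter distances and cost figures are invented for illustration, not taken from the study.

    ```python
    # Candidate barrier layouts: shelter distance (in barrier heights h)
    # and construction expense (arbitrary cost units). Purely illustrative.
    designs = {
        "2-row, 6h spacing": {"shelter": 18, "expense": 2.0},
        "3-row, 6h spacing": {"shelter": 30, "expense": 3.0},
        "4-row, 6h spacing": {"shelter": 33, "expense": 4.0},
    }

    def cost(d, price_weight=1.0, shelter_weight=0.1):
        """Cost function: expense penalised, shelter rewarded; lower is
        better. The weights price one cost unit against shelter length."""
        return price_weight * d["expense"] - shelter_weight * d["shelter"]

    best = min(designs, key=lambda name: cost(designs[name]))
    # → "3-row, 6h spacing": adding a 4th row buys little extra shelter
    #   relative to its extra expense
    ```

    The interesting design question is the weights: they encode how much extra shelter distance one additional cost unit must buy to be worthwhile, which is exactly the judgment the paper's cost function formalises.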

  5. Reduction and reconstruction of the dynamics of nonholonomic systems

    NASA Astrophysics Data System (ADS)

    Cortés, Jorge; de León, Manuel

    1999-12-01

    The reduction and reconstruction of the dynamics of nonholonomic mechanical systems with symmetry are investigated. We have considered a more general framework of constrained Hamiltonian systems since they appear in the reduction procedure. A reduction scheme in terms of the nonholonomic momentum mapping is developed. The reduction of the nonholonomic brackets is also discussed. The theory is illustrated with several examples.

  6. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research that can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the proposed scheme appreciably enhances recognition accuracy.

  7. Economics of internal and external energy storage in solar power plant operation

    NASA Technical Reports Server (NTRS)

    Manvi, R.; Fujita, T.

    1977-01-01

    A simple approach is formulated to investigate the effect of energy storage on the bus-bar electrical energy cost of solar thermal power plants. Economic analysis based on this approach does not require a detailed definition of a specific storage system. A wide spectrum of storage system candidates, ranging from hot water to superconducting magnets, can be studied based on total investment and a rough knowledge of energy input and output efficiencies. Preliminary analysis indicates that internal (thermal) energy storage schemes offer better opportunities for energy cost reduction than external (nonthermal) energy storage schemes in solar applications. Based on data and assumptions used in JPL evaluation studies, differential energy costs due to storage are presented for a 100 MWe solar power plant with varying energy capacity. The simple approach presented in this paper provides useful insight into the operation of energy storage in solar power plant applications, while also indicating a range of design parameters where storage can be cost effective.
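
    The kind of screening calculation described, requiring only total investment and rough in/out efficiencies, can be sketched as an annualised storage cost spread over delivered energy. Every number and the fixed-charge-rate parameter below are illustrative assumptions, not values from the paper.

    ```python
    def busbar_cost_delta(storage_capex, eta_in, eta_out,
                          annual_energy_mwh, fixed_charge_rate=0.15):
        """Rough differential bus-bar energy cost ($/MWh) attributable to
        storage: annualised investment divided by the energy actually
        delivered after round-trip (in * out) efficiency losses."""
        annual_cost = storage_capex * fixed_charge_rate
        delivered_mwh = annual_energy_mwh * eta_in * eta_out
        return annual_cost / delivered_mwh

    # Hypothetical plant: $5M storage investment, 90%/90% in/out
    # efficiencies, 50,000 MWh/yr routed through storage.
    delta = busbar_cost_delta(storage_capex=5e6, eta_in=0.9, eta_out=0.9,
                              annual_energy_mwh=50_000)  # ≈ $18.5/MWh
    ```

    The same formula shows why the paper needs only a "rough knowledge" of efficiencies: the cost delta scales inversely with the round-trip product, so very different storage technologies can be compared on investment and efficiency alone.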

  8. TREDI: A self consistent three-dimensional integration scheme for RF-gun dynamics based on the Lienard-Wiechert potentials formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giannessi, Luca; Quattromini, Marcello

    1997-06-01

    We describe the model used for simulating charged beam dynamics in radiofrequency injectors in the three-dimensional code TREDI, where space charge fields are included by means of the Lienard-Wiechert retarded potentials. The problem of charge screening is analyzed in covariant form, and some general recipes for charge assignment and noise reduction are given.

  9. Verification and accreditation schemes for climate change activities: A review of requirements for verification of greenhouse gas reductions and accreditation of verifiers—Implications for long-term carbon sequestration

    NASA Astrophysics Data System (ADS)

    Roed-Larsen, Trygve; Flach, Todd

    The purpose of this chapter is to provide a review of existing national and international requirements for the verification of greenhouse gas reductions and the associated accreditation of independent verifiers. The credibility of claimed reductions or removals of anthropogenic greenhouse gas (GHG) emissions is of utmost importance for the success of emerging schemes to reduce such emissions. Requirements include transparency, accuracy, consistency, and completeness of the GHG data. The many independent verification processes developed recently now make up a quite elaborate tool kit of best practices. The UN Framework Convention on Climate Change and the Kyoto Protocol specifications for project mechanisms initiated this work, but other national and international actors are also working intensely on these issues. One initiative gaining wide application is that taken by the World Business Council for Sustainable Development with the World Resources Institute to develop a "GHG Protocol" to assist companies in arranging for auditable monitoring and reporting of their GHG activities. A set of new international standards developed by the International Organization for Standardization (ISO) provides specifications for the quantification, monitoring, and reporting of company entity and project-based activities. The ISO is also developing specifications for recognizing independent GHG verifiers. This chapter covers this background with the intent of providing a common understanding of the efforts undertaken in different parts of the world to secure the reliability of GHG emission reduction and removal activities. These verification schemes may provide valuable input to current efforts to secure a comprehensive, trustworthy, and robust framework for verification of CO2 capture, transport, and storage.

  10. Dynamic adaptive chemistry with operator splitting schemes for reactive flow simulations

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Xu, Chao; Lu, Tianfeng; Singer, Michael A.

    2014-04-01

    A numerical technique that uses dynamic adaptive chemistry (DAC) with operator splitting schemes to solve the equations governing reactive flows is developed and demonstrated. Strang-based splitting schemes are used to separate the governing equations into transport fractional substeps and chemical reaction fractional substeps. The DAC method expedites the numerical integration of reaction fractional substeps by using locally valid skeletal mechanisms that are obtained using the directed relation graph (DRG) reduction method to eliminate unimportant species and reactions from the full mechanism. Second-order temporal accuracy of the Strang-based splitting schemes with DAC is demonstrated on one-dimensional, unsteady, freely-propagating, premixed methane/air laminar flames with detailed chemical kinetics and realistic transport. The use of DAC dramatically reduces the CPU time required to perform the simulation, and there is minimal impact on solution accuracy. It is shown that with DAC the starting species and resulting skeletal mechanisms strongly depend on the local composition in the flames. In addition, the number of retained species may be significant only near the flame front region where chemical reactions are significant. For the one-dimensional methane/air flame considered, speed-up factors of three and five are achieved over the entire simulation for GRI-Mech 3.0 and USC-Mech II, respectively. Greater speed-up factors are expected for larger chemical kinetics mechanisms.
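    The second-order accuracy of Strang splitting is easy to check on a toy problem. The sketch below (plain Python, not the authors' reactive-flow solver; the scalar model and its exact sub-solvers are illustrative assumptions) splits du/dt = -u - u^2 into a linear "transport" part and a nonlinear "reaction" part, advances each with its exact sub-solver in the half-full-half pattern, and verifies that halving the step size cuts the error by roughly a factor of four:

```python
import math

def strang_step(u, dt, transport, react):
    """One Strang split step: half reaction, full transport, half reaction."""
    u = react(u, dt / 2)
    u = transport(u, dt)
    u = react(u, dt / 2)
    return u

# Toy scalar model du/dt = -u - u**2, split into a linear "transport" part
# (du/dt = -u) and a nonlinear "reaction" part (du/dt = -u**2), each with an
# exact sub-solver.  Exact solution: u(t) = 1 / ((1/u0 + 1) * exp(t) - 1).
transport = lambda u, dt: u * math.exp(-dt)     # exact solve of du/dt = -u
react = lambda u, dt: u / (1.0 + u * dt)        # exact solve of du/dt = -u**2

u0, T = 1.0, 1.0
exact = 1.0 / ((1.0 / u0 + 1.0) * math.exp(T) - 1.0)

errors = []
for steps in (10, 20):
    dt, u = T / steps, u0
    for _ in range(steps):
        u = strang_step(u, dt, transport, react)
    errors.append(abs(u - exact))

ratio = errors[0] / errors[1]  # ~4 for a second-order scheme
```

The two sub-flows do not commute, so the splitting error is nonzero but shrinks as O(dt^2), which is the property the paper shows is preserved when DAC is applied inside the reaction substeps.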

  11. Aeroelastic Tailoring of Transport Wings Including Transonic Flutter Constraints

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Wieseman, Carol D.; Jutte, Christine V.

    2015-01-01

    Several minimum-mass optimization problems are solved to evaluate the effectiveness of a variety of novel tailoring schemes for subsonic transport wings. Aeroelastic stress and panel buckling constraints are imposed across several trimmed static maneuver loads, in addition to a transonic flutter margin constraint, captured with aerodynamic influence coefficient-based tools. Tailoring with metallic thickness variations, functionally graded materials, balanced or unbalanced composite laminates, curvilinear tow steering, and distributed trailing edge control effectors are all found to provide reductions in structural wing mass with varying degrees of success. The question as to whether this wing mass reduction will offset the increased manufacturing cost is left unresolved for each case.

  12. Treatment of mechanically sorted organic waste by bioreactor landfill: Experimental results and preliminary comparative impact assessment with biostabilization and conventional landfill.

    PubMed

    Di Maria, Francesco; Micale, Caterina; Sisani, Luciano; Rotondi, Luca

    2016-09-01

    Treatment and disposal of the mechanically sorted organic fraction (MSOF) of municipal solid waste using a full-scale hybrid bioreactor landfill was experimentally analyzed. A preliminary life cycle assessment was used to compare the hybrid bioreactor landfill with the conventional scheme based on aerobic biostabilization plus landfill. The main findings showed that hybrid bioreactor landfill was able to achieve a dynamic respiration index (DRI)<1000 mgO2/(kgVSh) in 20weeks, on average. Landfill gas (LFG) generation with CH4 concentration >55% v/v started within 140days from MSOF disposal, allowing prompt energy recovery and higher collection efficiency. With the exception of fresh water eutrophication with the bioreactor scenario there was a reduction of the impact categories by about 30% compared to the conventional scheme. Such environmental improvement was mainly a consequence of the reduction of direct and indirect emissions from conventional aerobic biostabilization and of the lower amount of gaseous loses from the bioreactor landfill. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    NASA Technical Reports Server (NTRS)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed in both weighted and unweighted variants. A Monte Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factors, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
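    The rectification idea can be illustrated with a minimal unweighted least-squares sketch (NumPy; the three-surface enclosure, its areas, and the noise level are made up, and the paper's weighted variant is not reproduced). Noisy view factors are projected onto the affine set defined by reciprocity (A_i F_ij = A_j F_ji) and closure (each row of F sums to one) by solving the KKT system of the constrained least-squares problem:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])                        # surface areas (made-up example)
G = np.array([[0.0, 0.4, 0.6],
              [0.4, 0.6, 1.0],
              [0.6, 1.0, 1.4]])                      # symmetric, row sums equal A
F_true = G / A[:, None]                              # satisfies reciprocity and closure

rng = np.random.default_rng(0)
F_raw = F_true + 0.05 * rng.standard_normal((3, 3))  # "approximately integrated" factors

n = A.size
rows, d = [], []
for i in range(n):                                   # closure: sum_j F_ij = 1
    r = np.zeros(n * n)
    r[i * n:(i + 1) * n] = 1.0
    rows.append(r); d.append(1.0)
for i in range(n):                                   # reciprocity: A_i F_ij = A_j F_ji
    for j in range(i + 1, n):
        r = np.zeros(n * n)
        r[i * n + j] = A[i]
        r[j * n + i] = -A[j]
        rows.append(r); d.append(0.0)
C, d = np.array(rows), np.array(d)

# Minimize ||x - F_raw||^2 subject to C x = d via the KKT system
# [[I, C^T], [C, 0]] [x; lam] = [F_raw; d].
m = C.shape[0]
K = np.block([[np.eye(n * n), C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([F_raw.ravel(), d])
F_rect = np.linalg.solve(K, rhs)[: n * n].reshape(n, n)
```

Because the projection onto the affine constraint set is non-expansive, the rectified matrix is never farther from the true (constraint-satisfying) matrix than the raw one; the paper's weighted scheme additionally scales the residuals by the expected accuracy of each entry.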

  14. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    PubMed

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time restriction (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. The performance of the technique is illustrated with several examples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
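    As a generic illustration of the model-reduction ingredient (not the authors' specific scheme or their non-linear solver), the sketch below builds a reduced basis by proper orthogonal decomposition (POD) of a made-up snapshot matrix and checks that two modes reproduce a rank-2 displacement field, so each full-order state can be replaced by two generalized coordinates:

```python
import numpy as np

# Toy "snapshot" matrix: a space-time field built from two spatial modes.
x = np.linspace(0.0, 1.0, 200)          # 200 spatial degrees of freedom
t = np.linspace(0.0, 1.0, 50)           # 50 snapshots in time
snapshots = (np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), np.cos(2 * np.pi * t)))

# POD: the dominant left singular vectors give the reduced basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 2                                    # keep 2 modes (the field is rank 2 here)
basis = U[:, :k]                         # 200 x 2 reduced basis

# A full-order state u is represented by k coordinates q = basis.T @ u;
# reconstruction is basis @ q.
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

Solving in the k-dimensional coordinate space instead of the 200-dimensional one is what makes the 500 Hz budget reachable; the paper's contribution is handling geometric non-linearity within such a reduced space.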

  15. An energy and potential enstrophy conserving scheme for the shallow water equations. [orography effects on atmospheric circulation

    NASA Technical Reports Server (NTRS)

    Arakawa, A.; Lamb, V. R.

    1979-01-01

    A three-dimensional finite difference scheme for the solution of the shallow water momentum equations which accounts for the conservation of potential enstrophy in the flow of a homogeneous incompressible shallow atmosphere over steep topography as well as for total energy conservation is presented. The scheme is derived to be consistent with a reasonable scheme for potential vorticity advection in a long-term integration for a general flow with divergent mass flux. Numerical comparisons of the characteristics of the present potential enstrophy-conserving scheme with those of a scheme that conserves potential enstrophy only for purely horizontal nondivergent flow are presented which demonstrate the reduction of computational noise in the wind field with the enstrophy-conserving scheme and its convergence even in relatively coarse grids.

  16. Numerical simulation of supersonic and hypersonic inlet flow fields

    NASA Technical Reports Server (NTRS)

    Mcrae, D. Scott; Kontinos, Dean A.

    1995-01-01

    This report summarizes the research performed by North Carolina State University and NASA Ames Research Center under Cooperative Agreement NCA2-719, "Numerical Simulation of Supersonic and Hypersonic Inlet Flow Fields." Four distinct rotated upwind schemes were developed and investigated to determine accuracy and practicality. The scheme found to have the best combination of attributes, including reduction to grid alignment with no rotation, was the cell-centered non-orthogonal (CCNO) scheme. In 2D, the CCNO scheme improved rotation when flux interpolation was extended to second order. In 3D, improvements were less dramatic in all cases, with second-order flux interpolation showing the least improvement over grid-aligned upwinding. The reduction in improvement is attributed to uncertainty in determining the optimum rotation angle and difficulty in performing accurate and efficient interpolation of the angle in 3D. The CCNO rotational technique will prove very useful for increasing accuracy when second-order interpolation is not appropriate and will materially improve inlet flow solutions.

  17. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  18. The Role of Education in Agricultural Projects for Food Security and Poverty Reduction in Kenya

    NASA Astrophysics Data System (ADS)

    Walingo, Mary Khakoni

    2006-05-01

    Agricultural development projects have been promoted in many places as a feature of poverty-reduction strategies. Such projects have often been implemented without a strong in-built education component, and hence have had little success. Agricultural projects seek to improve food security by diversifying a household's resource base and facilitating the social and economic empowerment of women. The present study presents a survey designed to assess the relationship between education level and ability to benefit from dairy-development projects in Kenya. Results reveal higher occupation and employment levels among beneficiary households than among non-beneficiary households. On the other hand, beneficiaries of poverty-reduction schemes require specialized training. Apart from project-specific training, the level of general education alone cannot predict the attainment of project objectives.

  19. Beam debunching due to ISR-induced energy diffusion

    DOE PAGES

    Yampolsky, Nikolai A.; Carlsten, Bruce E.

    2017-06-20

    One of the options for increasing longitudinal coherency of X-ray free electron lasers (XFELs) is seeding with a microbunched electron beam. Several schemes leading to significant amplitude of the beam bunching at X-ray wavelengths were recently proposed. All these schemes rely on beam optics having several magnetic dipoles. While the beam passes through a dipole, its energy spread increases due to quantum effects of synchrotron radiation. As a result, the bunching factor at small wavelengths reduces since electrons having different energies follow different trajectories in the bend. We rigorously calculate the reduction in the bunching factor due to incoherent synchrotron radiation while the beam travels in an arbitrary beamline. Lastly, we apply the general results to estimate the reduction of harmonic current in common schemes proposed for XFEL seeding.

  20. Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.

    PubMed

    Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan

    2017-08-13

    Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources, to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter, used in the inverter of the single-phase source, enables control of reactive power, to minimize voltage unbalance locally. The proposed scheme is validated by simulating a LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  1. Symmetry-breaking inelastic wave-mixing atomic magnetometry.

    PubMed

    Zhou, Feng; Zhu, Chengjie J; Hagley, Edward W; Deng, Lu

    2017-12-01

    The nonlinear magneto-optical rotation (NMOR) effect has prolific applications ranging from precision mapping of Earth's magnetic field to biomagnetic sensing. Studies on collisional spin relaxation effects have led to ultrahigh magnetic field sensitivities using a single-beam Λ scheme with state-of-the-art magnetic shielding/compensation techniques. However, the NMOR effect in this widely used single-beam Λ scheme is peculiarly small, requiring complex radio-frequency phase-locking protocols. We show the presence of a previously unknown energy symmetry-based nonlinear propagation blockade and demonstrate an optical inelastic wave-mixing NMOR technique that breaks this NMOR blockade, resulting in an NMOR optical signal-to-noise ratio (SNR) enhancement of more than two orders of magnitude never before seen with the single-beam Λ scheme. The large SNR enhancement was achieved simultaneously with a nearly two orders of magnitude reduction in laser power while preserving the magnetic resonance linewidth. This new method may open a myriad of applications ranging from biomagnetic imaging to precision measurement of the magnetic properties of subatomic particles.

  2. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.
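    The modulation-wiping idea for the BPSK case can be illustrated in a few lines (NumPy; the symbol count, noise level, and the assumption of perfectly correct fed-back decisions are illustrative simplifications of the iterative loop described above). Multiplying the received samples by the decisions removes the data, leaving a noisy constant phasor whose angle estimates the carrier phase without the noise-squaring of a conventional squaring loop:

```python
import numpy as np

rng = np.random.default_rng(2)
N, phi = 2000, 0.7                      # toy symbol count and true carrier phase
d = rng.choice([-1.0, 1.0], size=N)     # BPSK data symbols
noise = 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = d * np.exp(1j * phi) + noise        # baseband samples with unknown phase

# Conventional squaring idea: r**2 removes the data but squares the noise too.
phi_squaring = 0.5 * np.angle(np.mean(r ** 2))

# Information reduction via decision feedback: multiplying by the (here,
# ideally correct) fed-back decisions wipes the modulation, leaving a pure
# phasor e^{j*phi} plus zero-mean noise that averages away.
phi_df = np.angle(np.mean(d * r))
```

In the article's scheme the decisions come from a decoder with sufficient delay, so they are nearly independent of the instantaneous noise; after iteration the wiped signal approaches the pure tone assumed here.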

  3. Symmetry-breaking inelastic wave-mixing atomic magnetometry

    PubMed Central

    Zhou, Feng; Zhu, Chengjie J.; Hagley, Edward W.; Deng, Lu

    2017-01-01

    The nonlinear magneto-optical rotation (NMOR) effect has prolific applications ranging from precision mapping of Earth’s magnetic field to biomagnetic sensing. Studies on collisional spin relaxation effects have led to ultrahigh magnetic field sensitivities using a single-beam Λ scheme with state-of-the-art magnetic shielding/compensation techniques. However, the NMOR effect in this widely used single-beam Λ scheme is peculiarly small, requiring complex radio-frequency phase-locking protocols. We show the presence of a previously unknown energy symmetry–based nonlinear propagation blockade and demonstrate an optical inelastic wave-mixing NMOR technique that breaks this NMOR blockade, resulting in an NMOR optical signal-to-noise ratio (SNR) enhancement of more than two orders of magnitude never before seen with the single-beam Λ scheme. The large SNR enhancement was achieved simultaneously with a nearly two orders of magnitude reduction in laser power while preserving the magnetic resonance linewidth. This new method may open a myriad of applications ranging from biomagnetic imaging to precision measurement of the magnetic properties of subatomic particles. PMID:29214217

  4. The 'Real Welfare' scheme: benchmarking welfare outcomes for commercially farmed pigs.

    PubMed

    Pandolfi, F; Stoddart, K; Wainwright, N; Kyriazakis, I; Edwards, S A

    2017-10-01

    Animal welfare standards have been incorporated in EU legislation and in farm assurance schemes, based on scientific information and aiming to safeguard the welfare of the species concerned. Recently, emphasis has shifted from resource-based measures of welfare to animal-based measures, which are considered to assess more accurately the welfare status. The data used in this analysis were collected from April 2013 to May 2016 through the 'Real Welfare' scheme in order to assess on-farm pig welfare, as required for those finishing pigs under the UK Red Tractor Assurance scheme. The assessment involved five main measures (percentage of pigs requiring hospitalization, percentage of lame pigs, percentage of pigs with severe tail lesions, percentage of pigs with severe body marks and enrichment use ratio) and optional secondary measures (percentage of pigs with mild tail lesions, percentage of pigs with dirty tails, percentage of pigs with mild body marks, percentage of pigs with dirty bodies), with associated information about the environment and the enrichment in the farms. For the complete database, a sample of pens was assessed from 1928 farm units. Repeated measures were taken in the same farm unit over time, giving 112 240 records at pen level. These concerned a total of 13 480 289 pigs present on the farm during the assessments, with 5 463 348 pigs directly assessed using the 'Real Welfare' protocol. The three most common enrichment types were straw, chain and plastic objects. The main substrate was straw which was present in 67.9% of the farms. Compared with 2013, a significant increase of pens with undocked-tail pigs, substrates and objects was observed over time (P0.3). The results from the first 3 years of the scheme demonstrate a reduction of the prevalence of animal-based measures of welfare problems and highlight the value of this initiative.

  5. Limited effect of anthropogenic nitrogen oxides on secondary organic aerosol formation

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Unger, N.; Hodzic, A.; Emmons, L.; Knote, C.; Tilmes, S.; Lamarque, J.-F.; Yu, P.

    2015-12-01

    Globally, secondary organic aerosol (SOA) is mostly formed from emissions of biogenic volatile organic compounds (VOCs) by vegetation, but it can be modified by human activities as demonstrated in recent research. Specifically, nitrogen oxides (NOx = NO + NO2) have been shown to play a critical role in the chemical formation of low volatility compounds. We have updated the SOA scheme in the global NCAR (National Center for Atmospheric Research) Community Atmospheric Model version 4 with chemistry (CAM4-chem) by implementing a 4-product volatility basis set (VBS) scheme, including NOx-dependent SOA yields and aging parameterizations. Small differences are found for the no-aging VBS and 2-product schemes; large increases in SOA production and the SOA-to-OA ratio are found for the aging scheme. The predicted organic aerosol amounts capture both the magnitude and distribution of US surface annual mean measurements from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network to within 50 %, and the simulated vertical profiles are within a factor of 2 compared to aerosol mass spectrometer (AMS) measurements from 13 aircraft-based field campaigns across different regions and seasons. We then perform sensitivity experiments to examine how the SOA loading responds to a 50 % reduction in anthropogenic nitric oxide (NO) emissions in different regions. We find limited SOA reductions of 0.9-5.6, 6.4-12.0 and 0.9-2.8 % for global, southeast US and Amazon NOx perturbations, respectively. The fact that SOA formation is almost unaffected by changes in NOx can be largely attributed to a limited shift in chemical regime, to buffering in chemical pathways (low- and high-NOx pathways, O3 versus NO3-initiated oxidation) and to offsetting tendencies in the biogenic versus anthropogenic SOA responses.
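    The NOx-dependent yields enter a VBS through equilibrium absorptive partitioning. A minimal sketch of the core 4-bin calculation (plain Python; the saturation concentrations, per-bin organic masses, and seed aerosol below are illustrative assumptions, not CAM4-chem values) iterates the standard fixed point C_OA = seed + sum_i C_i / (1 + C*_i / C_OA):

```python
# 4-product volatility basis set partitioning sketch (illustrative values).
Cstar = [1.0, 10.0, 100.0, 1000.0]  # bin saturation concentrations, ug/m3
Ctot = [0.5, 1.0, 2.0, 4.0]         # total (gas + particle) organic mass per bin
seed = 0.5                          # pre-existing absorbing organic aerosol

Coa = seed                          # initial guess for particle-phase organics
for _ in range(200):                # fixed-point iteration for C_OA
    Coa_new = seed + sum(c / (1.0 + cs / Coa) for c, cs in zip(Ctot, Cstar))
    if abs(Coa_new - Coa) < 1e-12:
        break
    Coa = Coa_new

# Particle-phase fraction of each bin: 1 / (1 + C*_i / C_OA).
fractions = [1.0 / (1.0 + cs / Coa) for cs in Cstar]
```

Lower-volatility bins (small C*) partition almost entirely into the particle phase; shifting mass between bins is how NOx-dependent yields and aging change the predicted SOA burden.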

  6. Positive Psychology for Overcoming Symptoms of Depression: A Pilot Study Exploring the Efficacy of a Positive Psychology Self-Help Book versus a CBT Self-Help Book.

    PubMed

    Hanson, Katie

    2018-04-25

    Depression is an extremely common mental health disorder, with prevalence rates rising. Low-intensity interventions are frequently used to help meet the demand for treatment. Bibliotherapy, for example, is often prescribed via books on prescription schemes (for example 'Reading Well' in England) to those with mild to moderate symptomology. Bibliotherapy can effectively reduce symptoms of depression (Naylor et al., 2010). However, the majority of self-help books are based on cognitive behavioural therapy (CBT), which may not be suitable for all patients. Research supports the use of positive psychology interventions for the reduction of depression symptoms (Bolier et al., 2013) and as such self-help books from this perspective should be empirically tested. This study aimed to test the efficacy of 'Positive Psychology for Overcoming Depression' (Akhtar, 2012), a self-help book for depression that is based on the principles of positive psychology, in comparison with a CBT self-help book that is currently prescribed in England as part of the Reading Well books on prescription scheme. Participants (n = 115) who were not receiving treatment, but had symptoms of depression, read the positive psychology or the CBT self-help book for 8 weeks. Depression and well-being were measured at baseline, post-test and 1-month follow-up. Results suggest that both groups experienced a reduction in depression and an increase in well-being, with no differences noted between the two books. Future directions are discussed in terms of dissemination, to those with mild to moderate symptoms of depression, via books on prescription schemes.

  7. Energy efficiency analysis of two-sided feed scheme of DC traction network with high asymmetry of feeders parameters

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Sopov, V. I.

    2017-10-01

    Using the example of a traction network section with highly asymmetric power supply parameters, this study presents a comparative assessment of power losses in a DC traction network under parallel and traditional separated operating modes of the traction substation feeders. Experimental measurements were carried out under both modes of operation. The calculated results, based on statistical processing, showed a decrease in power losses in the contact network and an increase in the feeders. The changes proved to be critical, which demonstrates the significance of the potential effects of converting traction network sections to parallel feeder operation. An analytical method for calculating the average power losses for different feed schemes of the traction network was developed. On its basis, the dependences of the relative losses were obtained by varying the difference in feeder voltages. The calculation results showed that transition to a two-sided feed scheme is unjustified for the traction network section considered. A larger reduction in the total power loss can be obtained with a smaller difference in the feeders' resistance and/or a more symmetrical sectioning scheme of the contact network.

  8. Government health insurance for people below poverty line in India: quasi-experimental evaluation of insurance and health outcomes.

    PubMed

    Sood, Neeraj; Bendavid, Eran; Mukherji, Arnab; Wagner, Zachary; Nagpal, Somil; Mullen, Patrick

    2014-09-11

    To evaluate the effects of a government insurance program covering tertiary care for people below the poverty line in Karnataka, India, on out-of-pocket expenditures, hospital use, and mortality. Geographic regression discontinuity study. 572 villages in Karnataka, India. 31,476 households (22,796 below poverty line and 8680 above poverty line) in 300 villages where the scheme was implemented and 28,633 households (21,767 below poverty line and 6866 above poverty line) in 272 neighboring matched villages ineligible for the scheme. A government insurance program (Vajpayee Arogyashree scheme) that provided free tertiary care to households below the poverty line in about half of villages in Karnataka from February 2010 to August 2012. Out-of-pocket expenditures, hospital use, and mortality. Among households below the poverty line, the mortality rate from conditions potentially responsive to services covered by the scheme (mostly cardiac conditions and cancer) was 0.32% in households eligible for the scheme compared with 0.90% among ineligible households just south of the eligibility border (difference of 0.58 percentage points, 95% confidence interval 0.40 to 0.75; P<0.001). We found no difference in mortality rates for households above the poverty line (households above the poverty line were not eligible for the scheme), with a mortality rate from conditions covered by the scheme of 0.56% in eligible villages compared with 0.55% in ineligible villages (difference of 0.01 percentage points, -0.03 to 0.03; P=0.95). Eligible households had significantly reduced out-of-pocket health expenditures for admissions to hospitals with tertiary care facilities likely to be covered by the scheme (64% reduction, 35% to 97%; P<0.001). There was no significant increase in use of covered services, although the point estimate of a 44.2% increase approached significance (-5.1% to 90.5%; P=0.059). 
Both reductions in out-of-pocket expenditures and potential increases in use might have contributed to the observed reductions in mortality. Insuring poor households for efficacious but costly and underused health services significantly improves population health in India. © Sood et al 2014.

  9. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection function, can also be used for noise removal applications. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to decrementing each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter which controls smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with some classic algorithms, such as Wiener and Total Variation based filters, for Gaussian noise. The method is also compared with the state-of-the-art method BM3D on some images. The algorithm appears to be easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
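    A diffusion-style sketch of the idea (NumPy; the inverse-Laplacian-magnitude weight is an ad-hoc stand-in for the paper's local noise estimator, and the step-edge test image is made up) updates each pixel by its 3x3 Laplacian, damped near strong edges, with the iteration count as the only smoothness control:

```python
import numpy as np

def laplacian(u):
    """3x3 five-point Laplacian with replicated (edge) boundaries."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def denoise(img, n_iter=20, step=0.2):
    """Iterative Laplacian filtering; number of iterations controls smoothness."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        lap = laplacian(u)
        w = 1.0 / (1.0 + np.abs(lap))   # crude local weight: damp updates at edges
        u += step * w * lap             # diffusion-style update (step <= 0.25 stable)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                     # step-edge test image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = denoise(noisy)
```

On this toy image the filtered result is closer to the clean image than the noisy input, while the weight limits blurring across the edge.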

  10. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method respectively to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
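    The TE-versus-LS trade-off can be seen in one dimension (NumPy; the staggered first-derivative stencil, the 0.8*pi optimization band, and the sampling are simplifying assumptions, not the TTI RSFD operators). Least-squares coefficients fitted over a wide wavenumber band reduce both the L2 and the maximum dispersion error relative to Taylor coefficients of the same stencil length, which are exact only near k = 0:

```python
import numpy as np

M = 2                                    # two-point half-stencil (4th order as Taylor)
k = np.linspace(0.01, 0.8 * np.pi, 400)  # wavenumber band to optimize over (assumed)

# Spectral response of a staggered first-derivative stencil:
#   D(k) = 2 * sum_m c_m * sin((2m-1) k / 2), which should approximate k.
basis = np.stack([2 * np.sin((2 * m - 1) * k / 2) for m in range(1, M + 1)], axis=1)

c_taylor = np.array([9 / 8, -1 / 24])              # classical Taylor coefficients
c_ls, *_ = np.linalg.lstsq(basis, k, rcond=None)   # least-squares coefficients

err_taylor = basis @ c_taylor - k
err_ls = basis @ c_ls - k
```

The Taylor coefficients are a feasible point of the least-squares problem, so the LS fit is guaranteed to have a smaller residual over the band; the price is a small loss of accuracy at the very smallest wavenumbers, mirroring the SA-versus-LS comparison in the abstract.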

  11. [Nursing homes and fall in the consumption of psychotropic medications].

    PubMed

    Giet, Régis; Bonet, Claudine

    The consumption of psychotropic drugs in elderly people remains a concern in France, including in nursing homes. A comparative analysis of prescriptions for psychotropic medication in nursing homes in 2013 and 2015 based on the computer system of the French national health insurance scheme shows a significant reduction in the prescribing of these medications. Example of a nursing home in Dijon. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  12. Energetic valorization of wood waste: estimation of the reduction in CO2 emissions.

    PubMed

    Vanneste, J; Van Gerven, T; Vander Putten, E; Van der Bruggen, B; Helsen, L

    2011-09-01

    This paper investigates the potential CO2 emission reductions related to a partial switch from fossil fuel-based heat and electricity generation to renewable wood waste-based systems in Flanders. The results show that valorization in large-scale CHP (combined heat and power) systems and co-firing in coal plants have the largest CO2 reduction per TJ wood waste. However, at current co-firing rates of 10%, the CO2 reduction per GWh of electricity that can be achieved by co-firing in coal plants is five times lower than the CO2 reduction per GWh of large-scale CHP. Moreover, analysis of the effect of government support for co-firing of wood waste in coal-fired power plants on the marginal costs of electricity generation plants reveals that the effect of the European Emission Trading Scheme (EU ETS) is effectively counterbalanced. This is due to the fact that biomass integrated gasification combined cycles (BIGCC) are not yet commercially available. An increase of the fraction of coal-based electricity in the total electricity generation from 8 to 10% at the expense of the fraction of gas-based electricity due to the government support for co-firing wood waste, would compensate entirely for the CO2 reduction by substitution of coal by wood waste. This clearly illustrates the possibility of a 'rebound' effect on the CO2 reduction due to government support for co-combustion of wood waste in an electricity generation system with large installed capacity of coal- and gas-based power plants, such as the Belgian one. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Fast modal extraction in NASTRAN via the FEER computer program. [based on automatic matrix reduction method for lower modes of structures with many degrees of freedom

    NASA Technical Reports Server (NTRS)

    Newman, M. B.; Pipano, A.

    1973-01-01

    A new eigensolution routine, FEER (Fast Eigensolution Extraction Routine), used in conjunction with NASTRAN at Israel Aircraft Industries is described. The FEER program is based on an automatic matrix reduction scheme whereby the lower modes of structures with many degrees of freedom can be accurately extracted from a tridiagonal eigenvalue problem whose size is of the same order of magnitude as the number of required modes. The process is effected without arbitrary lumping of masses at selected node points or selection of nodes to be retained in the analysis set. The results of computational efficiency studies are presented, showing major arithmetic operation counts and actual computer run times of FEER as compared to other methods of eigenvalue extraction, including those available in the NASTRAN READ module. It is concluded that the tridiagonal reduction method used in FEER would serve as a valuable addition to NASTRAN for highly increased efficiency in obtaining structural vibration modes.

  14. Does the financial protection of health insurance vary across providers? Vietnam's experience.

    PubMed

    Sepehri, Ardeshir; Sarma, Sisira; Oguzoglu, Umut

    2011-08-01

    Using household panel data from Vietnam, this paper compares out-of-pocket health expenditures on outpatient care at a health facility between insured and uninsured patients as well as across various providers. In the random effects model, the estimated coefficient of the insurance status variable suggests that insurance reduces out-of-pocket spending by 24% for those with the compulsory and voluntary coverage and by about 15% for those with the health insurance for the poor coverage. However, the modest financial protection of the compulsory and voluntary schemes disappears once we control for time-invariant unobserved individual effects using the fixed effects model. Additional analysis of the interaction terms involving the type of insurance and health facility suggests that the overall insignificant reduction in out-of-pocket expenditures as a result of the insurance schemes masks wide variations in the reduction in out-of-pocket spending across various providers. Insurance reduces out-of-pocket expenditures more for those enrollees using district and higher level public health facilities than those using commune health centers. Compared to the uninsured patients using district hospitals, compulsory and voluntary insurance schemes reduce out-of-pocket expenditures by 40 and 32%, respectively. However, for contacts at the commune health centers, both the compulsory and the voluntary health insurance schemes have little influence on out-of-pocket spending while the health insurance scheme for the poor reduces out-of-pocket spending by about 15%. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Living with a large reduction in permitted loading by using a hydrograph-controlled release scheme

    USGS Publications Warehouse

    Conrads, P.A.; Martello, W.P.; Sullins, N.R.

    2003-01-01

    The Total Maximum Daily Load (TMDL) for ammonia and biochemical oxygen demand for the Pee Dee, Waccamaw, and Atlantic Intracoastal Waterway system near Myrtle Beach, South Carolina, mandated a 60-percent reduction in point-source loading. For waters with naturally low background dissolved-oxygen concentrations, South Carolina anti-degradation rules in the water-quality regulations allow a permitted discharger a reduction of dissolved oxygen of 0.1 milligrams per liter (mg/L). This is known as the "0.1 rule." Permitted dischargers within this region of the State operate under the "0.1 rule" and cannot cause a cumulative impact greater than 0.1 mg/L on dissolved-oxygen concentrations. For municipal water-reclamation facilities to serve the rapidly growing resort and retirement community near Myrtle Beach, a variable loading scheme was developed to allow dischargers to utilize increased assimilative capacity during higher streamflow conditions while still meeting the requirements of a recently established TMDL. As part of the TMDL development, an extensive real-time data-collection network was established in the lower Waccamaw and Pee Dee River watershed, where continuous measurements of streamflow, water level, dissolved oxygen, temperature, and specific conductance are collected. In addition, the dynamic BRANCH/BLTM models were calibrated and validated to simulate the water quality and tidal dynamics of the system. The assimilative capacities for various streamflows were also analyzed. The variable-loading scheme established total loadings for three streamflow levels. Model simulations show the results from the additional loading to be less than a 0.1 mg/L reduction in dissolved oxygen. As part of the loading scheme, the real-time network was redesigned to monitor streamflow entering the study area and water-quality conditions in the location of dissolved-oxygen "sags."
The study reveals how one group of permit holders used a variable-loading scheme to implement restrictive permit limits without experiencing prohibitive capital expenditures or initiating a lengthy appeals process.

  16. The opportunity of silicate product manufacturing with simultaneous pig iron reduction from slag technogenic formations

    NASA Astrophysics Data System (ADS)

    Sheshukov, O. Yu.; Lobanov, D. A.; Mikheenkov, M. A.; Nekrasov, I. V.; Egiazaryan, D. K.

    2017-09-01

    There are two main kinds of slag in the modern steelmaking industry: electric arc furnace slag (EAF slag) and ladle furnace slag (LF slag). All known slag-processing schemes provide for reduction of the iron-containing component while the silicate component remains unprocessed. Conversely, silicate-processing schemes do not provide for utilization of the iron-containing component. As a result, no current scheme solves the problem of total slag utilization. The aim of this work is to investigate the possibility of obtaining a silicate product with simultaneous pig iron reduction from EAF and LF slags. The tests are conducted by the method of simplex-lattice design. The test samples are heated and melted under reducing conditions, slowly cooled and then analyzed by XRD methods. The experimental results confirm this possibility: Portland clinker and pig iron can be produced simultaneously on the basis of these slags with a limestone addition.

  17. Towards a First-Principles Determination of Effective Coulomb Interactions in Correlated Electron Materials: Role of Intershell Interactions

    NASA Astrophysics Data System (ADS)

    Seth, Priyanka; Hansmann, Philipp; van Roekeghem, Ambroise; Vaugier, Loig; Biermann, Silke

    2017-08-01

    The determination of the effective Coulomb interactions to be used in low-energy Hamiltonians for materials with strong electronic correlations remains one of the bottlenecks for parameter-free electronic structure calculations. We propose and benchmark a scheme for determining the effective local Coulomb interactions for charge-transfer oxides and related compounds. Intershell interactions between electrons in the correlated shell and ligand orbitals are taken into account in an effective manner, leading to a reduction of the effective local interactions on the correlated shell. Our scheme resolves inconsistencies in the determination of effective interactions as obtained by standard methods for a wide range of materials, and allows for a conceptual understanding of the relation of cluster model and dynamical mean field-based electronic structure calculations.

  18. Towards a First-Principles Determination of Effective Coulomb Interactions in Correlated Electron Materials: Role of Intershell Interactions.

    PubMed

    Seth, Priyanka; Hansmann, Philipp; van Roekeghem, Ambroise; Vaugier, Loig; Biermann, Silke

    2017-08-04

    The determination of the effective Coulomb interactions to be used in low-energy Hamiltonians for materials with strong electronic correlations remains one of the bottlenecks for parameter-free electronic structure calculations. We propose and benchmark a scheme for determining the effective local Coulomb interactions for charge-transfer oxides and related compounds. Intershell interactions between electrons in the correlated shell and ligand orbitals are taken into account in an effective manner, leading to a reduction of the effective local interactions on the correlated shell. Our scheme resolves inconsistencies in the determination of effective interactions as obtained by standard methods for a wide range of materials, and allows for a conceptual understanding of the relation of cluster model and dynamical mean field-based electronic structure calculations.

  19. ONU Power Saving Scheme for EPON System

    NASA Astrophysics Data System (ADS)

    Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki

    PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among plural subscribers. Recently, global climate change has been recognized as a serious near-term problem, so power saving techniques for electronic devices are important. In PON systems, the ONU (Optical Network Unit) power saving scheme has been studied and defined in XG-PON. In this paper, we propose an ONU power saving scheme for EPON. Then, we present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. Based on the analysis, we propose an efficient provisioning method for the ONU power saving scheme which is applicable to both XG-PON and EPON.
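
    The power/delay tradeoff analyzed above can be illustrated with a toy cyclic-sleep model (all parameters below are illustrative assumptions, not values from the paper): the ONU alternates between sleep and brief wake-ups, so longer sleep intervals lower average power but lengthen the worst-case downstream delay.

```python
# Toy model of cyclic ONU sleep (illustrative parameters, not from the paper).
# During each cycle the ONU sleeps for t_sleep, then wakes for t_active to
# check for buffered downstream frames.

def avg_power(p_active, p_sleep, t_active, t_sleep):
    """Time-averaged power over one sleep/wake cycle (watts)."""
    cycle = t_active + t_sleep
    return (p_active * t_active + p_sleep * t_sleep) / cycle

def worst_case_delay(t_active, t_sleep):
    """A frame arriving just after the ONU sleeps waits a full sleep interval."""
    return t_sleep

# Hypothetical figures: 4 W active, 1 W asleep, 2 ms awake per cycle.
for t_sleep_ms in (10, 50, 100):
    p = avg_power(4.0, 1.0, 2.0, float(t_sleep_ms))
    print(f"sleep={t_sleep_ms} ms  avg power={p:.2f} W  max added delay={t_sleep_ms} ms")
```

    Lengthening the sleep interval drives the average power toward the sleep floor, which is precisely the provisioning tradeoff such a scheme must balance against delay requirements.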

  20. Construction of an all-solid-state artificial Z-scheme system consisting of Bi2WO6/Au/CdS nanostructure for photocatalytic CO2 reduction into renewable hydrocarbon fuel.

    PubMed

    Wang, Meng; Han, Qiutong; Li, Liang; Tang, Lanqin; Li, Haijin; Zhou, Yong; Zou, Zhigang

    2017-07-07

    An all-solid-state Bi 2 WO 6 /Au/CdS Z-scheme system was constructed for the photocatalytic reduction of CO 2 into methane in the presence of water vapor. This Z-scheme consists of ultrathin Bi 2 WO 6 nanoplates and CdS nanoparticles as photocatalysts, and a Au nanoparticle as a solid electron mediator offering a high speed charge transfer channel and leading to more efficient spatial separation of electron-hole pairs. The photo-generated electrons from the conduction band (CB) of Bi 2 WO 6 transfer to the Au, and then release to the valence band (VB) of CdS to recombine with the holes of CdS. It allows the electrons remaining in the CB of CdS and holes in the VB of Bi 2 WO 6 to possess strong reduction and oxidation powers, respectively, leading the Bi 2 WO 6 /Au/CdS to exhibit high photocatalytic reduction of CO 2 , relative to bare Bi 2 WO 6 , Bi 2 WO 6 /Au, and Bi 2 WO 6 /CdS. The depressed hole density on CdS also enhances the stability of the CdS against photocorrosion.

  1. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants.

    PubMed

    Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world with regard to preserving the environment. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of the three kinds of gases on the basis of their common constraints across different modes of energy consumption, in order to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study: the Granger causality test, to analyze the causality between air quality and pollutant emission; a function analysis, to determine the quantitative relation between energy consumption and pollutant emission; a multi-objective optimization, to set up the collaborative optimization model that considers energy consumption; and an optimality condition analysis for the multi-objective optimization model, to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, data on pollutant emissions and final energy consumption in Tianjin from 1996 to 2012 were employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are made and conclusions are drawn.
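
    As a sketch of the collaborative idea, one scalar objective can be built from the three pollutants and minimized over candidate energy mixes; the emission factors, weights, and the gas-supply constraint below are hypothetical placeholders, not the paper's Tianjin data, and plain grid enumeration stands in for the paper's optimal-pole algorithm.

```python
# Weighted-sum scalarization of a three-pollutant objective over candidate
# energy mixes (coal, oil, gas shares). All numbers are illustrative.

from itertools import product

# Hypothetical emission factors per unit energy: (CO2, SO2, NOx)
FACTORS = {"coal": (95.0, 0.60, 0.30), "oil": (73.0, 0.35, 0.25), "gas": (56.0, 0.01, 0.10)}

def emissions(mix):
    """Total (CO2, SO2, NOx) for a mix of energy shares summing to 1."""
    return tuple(sum(mix[f] * FACTORS[f][i] for f in FACTORS) for i in range(3))

def scalarized(mix, weights=(1.0, 50.0, 100.0)):
    """Collapse the three objectives into one score with assumed weights."""
    return sum(w * e for w, e in zip(weights, emissions(mix)))

# Enumerate mixes on a coarse grid, keeping the feasible mix with lowest score.
best = None
for c, o in product(range(0, 11), repeat=2):
    g = 10 - c - o
    if g < 0:
        continue
    mix = {"coal": c / 10, "oil": o / 10, "gas": g / 10}
    if mix["gas"] > 0.5:          # toy supply constraint on natural gas
        continue
    score = scalarized(mix)
    if best is None or score < best[0]:
        best = (score, mix)

print("best mix:", best[1], "emissions:", emissions(best[1]))
```

    With these placeholder weights the search saturates the gas constraint and fills the remainder with the next-cleanest fuel, which is the qualitative behavior a collaborative-reduction scheme is meant to formalize.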

  2. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants

    PubMed Central

    Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world with regard to preserving the environment. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of the three kinds of gases on the basis of their common constraints across different modes of energy consumption, in order to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study: the Granger causality test, to analyze the causality between air quality and pollutant emission; a function analysis, to determine the quantitative relation between energy consumption and pollutant emission; a multi-objective optimization, to set up the collaborative optimization model that considers energy consumption; and an optimality condition analysis for the multi-objective optimization model, to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, data on pollutant emissions and final energy consumption in Tianjin from 1996 to 2012 were employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are made and conclusions are drawn. PMID:27010658

  3. Kinetics and mechanisms of 1,5-dihydroflavin reduction of carbonyl compounds and flavin oxidation of alcohols. III. Oxidation of benzoin by flavin and reduction of benzil by 1,5-dihydroflavin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruice, T.C.; Taulane, J.P.

    1976-11-24

    The oxidation of benzoin by lumiflavin-3-acetic acid (Fl_ox) to provide benzil and 1,5-dihydrolumiflavin-3-acetic acid (FlH2) is a readily reversible reaction. It has been established that the mechanism involves general base ionization of benzoin carbon acid (α-ketol) to yield endiolate anion, followed by partitioning of the endiolate anion back to benzoin through general acid proton donation and to benzil by reaction with Fl_ox. The reaction of endiolate anion with Fl_ox is not subject to acid or base catalysis. Evidence that ionization of benzoin precedes its oxidation by Fl_ox stems from the observation that the rate attributed to the latter process possesses a constant equal to that for racemization of (+)-benzoin and O2 oxidation of benzoin, and that this rate constant is characterized by a primary deuterium kinetic isotope effect (k_benzoin/k_α-2H-benzoin) of 7.24 ± 1.5. Reduction of benzil to benzoin by FlH2 is pH and buffer insensitive below the pKa of FlH2. These results are consistent with either general acid catalyzed attack of benzoin carbanion at the 4a-position of Fl_ox, followed by a specific base catalyzed collapse of adduct to diketone and dihydroflavin (Scheme III), or with the uncatalyzed reaction of carbanion (endiolate anion) with flavin to provide a semidione-flavin radical pair which then goes on to diketone and dihydroflavin in a non-acid-base-catalyzed reaction (Scheme V). These mechanisms are discussed in terms of the kinetics of reaction of other carbanion species with flavin.

  4. Analytic integration of real-virtual counterterms in NNLO jet cross sections I

    NASA Astrophysics Data System (ADS)

    Aglietti, Ugo; Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Trócsányi, Zoltán

    2008-09-01

    We present analytic evaluations of some integrals needed to give explicitly the integrated real-virtual counterterms, based on a recently proposed subtraction scheme for next-to-next-to-leading order (NNLO) jet cross sections. After an algebraic reduction of the integrals, integration-by-parts identities are used for the reduction to master integrals and for the computation of the master integrals themselves by means of differential equations. The results are written in terms of one- and two-dimensional harmonic polylogarithms, once an extension of the standard basis is made. We expect that the techniques described here will be useful in computing other integrals emerging in calculations in perturbative quantum field theories.

  5. Sensitivity of CAM5-simulated Arctic clouds and radiation to ice nucleation parameterization

    DOE PAGES

    Xie, Shaocheng; Liu, Xiaohong; Zhao, Chuanfeng; ...

    2013-08-06

    Sensitivity of Arctic clouds and radiation in the Community Atmospheric Model, version 5, to the ice nucleation process is examined by testing a new physically based ice nucleation scheme that links the variation of ice nuclei (IN) number concentration to aerosol properties. The default scheme parameterizes the IN concentration simply as a function of ice supersaturation. The new scheme leads to a significant reduction in simulated IN concentration at all latitudes while changes in cloud amounts and properties are mainly seen at high- and midlatitude storm tracks. In the Arctic, there is a considerable increase in midlevel clouds and a decrease in low-level clouds, which result from the complex interaction among the cloud macrophysics, microphysics, and large-scale environment. The smaller IN concentrations result in an increase in liquid water path and a decrease in ice water path caused by the slowdown of the Bergeron–Findeisen process in mixed-phase clouds. Overall, there is an increase in the optical depth of Arctic clouds, which leads to a stronger cloud radiative forcing (net cooling) at the top of the atmosphere. The comparison with satellite data shows that the new scheme slightly improves low-level cloud simulations over most of the Arctic but produces too many midlevel clouds. Considerable improvements are seen in the simulated low-level clouds and their properties when compared with Arctic ground-based measurements. As a result, issues with the observations and the model–observation comparison in the Arctic region are discussed.

  6. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamic (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood.
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  7. Experimental validation of a Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.
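
    Setting aside the fluence modelling (the hard part that the Monte-Carlo forward model addresses), the final ratio-estimation step can be sketched as linear unmixing of a multi-wavelength absorption measurement into two chromophores; the spectra below are invented illustrative numbers, not measured molar absorption coefficients.

```python
# Two-chromophore linear unmixing: solve min ||E c - mu||^2 for concentrations
# c = (c1, c2) via the 2x2 normal equations. Spectra are illustrative, and the
# wavelength-dependent fluence is assumed to have been corrected for already.

def unmix(E, mu):
    """E: list of (e1, e2) absorption rows, one per wavelength; mu: measurements."""
    # Normal equations: (E^T E) c = E^T mu, solved in closed form for 2x2.
    a = sum(e1 * e1 for e1, _ in E)
    b = sum(e1 * e2 for e1, e2 in E)
    d = sum(e2 * e2 for _, e2 in E)
    r1 = sum(e1 * m for (e1, _), m in zip(E, mu))
    r2 = sum(e2 * m for (_, e2), m in zip(E, mu))
    det = a * d - b * b
    return ((d * r1 - b * r2) / det, (a * r2 - b * r1) / det)

# Hypothetical spectra of two chromophores at four wavelengths.
E = [(2.0, 0.5), (1.5, 1.0), (1.0, 1.8), (0.6, 2.5)]
c_true = (0.3, 0.7)
mu = [e1 * c_true[0] + e2 * c_true[1] for e1, e2 in E]  # noiseless synthetic data

c_est = unmix(E, mu)
ratio = c_est[0] / (c_est[0] + c_est[1])   # analogous to an sO2-style ratio
print(f"estimated concentrations: {c_est[0]:.3f}, {c_est[1]:.3f}, ratio {ratio:.3f}")
```

    The ratio is the quantity of interest because, as the abstract notes, concentration ratios are more robust than absolute concentrations to unknown scaling of the incident fluence.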

  8. Comparison in Schemes for Simulating Depositional Growth of Ice Crystal between Theoretical and Laboratory Data

    NASA Astrophysics Data System (ADS)

    Zhai, Guoqing; Li, Xiaofan

    2015-04-01

    The Bergeron-Findeisen process has been simulated in past decades using parameterization schemes for the depositional growth of ice crystal with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. Three schemes parameterize the depositional growth of ice crystal: Hsie et al. (1980), Krueger et al. (1995), and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using the three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case in this study. The analysis of root-mean-squared difference and correlation coefficient between the simulation and observation of surface rain rate shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observational rain rate. The calculations of 5-day and model domain mean rain rates reveal that the three schemes with Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the closest mean rain rate to the mean observational rain rate. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to the suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.

  9. Understanding Parental Stress within the Scallywags Service for Children with Emotional and Behavioural Difficulties

    ERIC Educational Resources Information Center

    Broadhead, Moira; Chilton, Roy; Crichton, Catriona

    2009-01-01

    The Scallywags service works specifically within home and school environments to promote parent, teacher and child competencies for children at risk of developing behavioural and/or emotional problems. The scheme has been successfully evaluated, demonstrating significant reductions in parental stress for parents involved in the scheme. This paper…

  10. CSI Feedback Reduction for MIMO Interference Alignment

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Ruan, Liangzhong; Lau, Vincent K. N.

    2013-09-01

    Interference alignment (IA) is a linear precoding strategy that can achieve optimal capacity scaling at high SNR in interference networks. Most of the existing IA designs require full channel state information (CSI) at the transmitters, which induces a huge CSI signaling cost. Hence it is desirable to improve the feedback efficiency for IA, and in this paper we propose a novel IA scheme with significantly reduced CSI feedback. To quantify the CSI feedback cost, we introduce a novel metric, namely the feedback dimension. This metric serves as a first-order measurement of CSI feedback overhead. Due to the partial CSI feedback constraint, conventional IA schemes cannot be applied; hence, we develop a novel IA precoder/decorrelator design and establish new IA feasibility conditions. Via dynamic feedback profile design, the proposed IA scheme can also achieve a flexible tradeoff between the degrees-of-freedom (DoF) requirements for data streams, the antenna resources and the CSI feedback cost. We show by analysis and simulations that the proposed scheme achieves substantial reductions of CSI feedback overhead under the same DoF requirement in MIMO interference networks.

  11. Upon Generating (2+1)-dimensional Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Bai, Yang; Wu, Lixin

    2016-06-01

    Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring so as to construct a new dynamical system in 2+1 dimensions, and then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. Next, with the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can reduce to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. Further, we extend the binormial residue representation (BRR) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a kind of symmetric Lie algebra of the reductive homogeneous group G. As an application, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. Finally, we compare the advantages and shortcomings of the three schemes for generating integrable dynamical systems.

  12. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is firstmore » roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and medium DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. 
The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
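The two-step selection logic can be sketched as follows. Here `cheap_score` and `full_score` stand in for the preliminary and full-fledged relevance metrics, and the augmented-subset size `m` is just a parameter; in the paper it is derived from the inference model relating the two metrics.

```python
def two_stage_select(atlases, cheap_score, full_score, m, k):
    """Two-stage fusion-set selection (sketch).

    cheap_score: preliminary relevance from a simple, low-cost registration
    full_score:  refined relevance from full-fledged registration
    m: augmented-subset size (m >= k; derived in the paper so the best
       atlases survive stage 1 with high probability)
    k: desired fusion-set size
    """
    # Stage 1: rank the whole collection with the cheap metric only.
    augmented = sorted(atlases, key=cheap_score, reverse=True)[:m]
    # Stage 2: full-fledged registration restricted to the augmented subset.
    return sorted(augmented, key=full_score, reverse=True)[:k]
```

The cost saving comes from running the expensive `full_score` on only `m` atlases instead of the whole collection.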

  13. Energy and Quality Evaluation for Compressive Sensing of Fetal Electrocardiogram Signals

    PubMed Central

    Da Poian, Giulia; Brandalise, Denis; Bernardini, Riccardo; Rinaldo, Roberto

    2016-01-01

    This manuscript addresses the problem of non-invasive fetal Electrocardiogram (ECG) signal acquisition with low power/low complexity sensors. A sensor architecture using the Compressive Sensing (CS) paradigm is compared to a standard compression scheme using wavelets in terms of energy consumption vs. reconstruction quality, and, more importantly, vs. performance of fetal heart beat detection in the reconstructed signals. We show in this paper that a CS scheme based on reconstruction with an over-complete dictionary has similar reconstruction quality to one based on wavelet compression. We also consider, as a more important figure of merit, the accuracy of fetal beat detection after reconstruction as a function of the sensor power consumption. Experimental results with an actual implementation in a commercial device show that CS allows significant reduction of energy consumption in the sensor node, and that the detection performance is comparable to that obtained from original signals for compression ratios up to about 75%. PMID:28025510
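The acquire-compressed/reconstruct-later pipeline can be illustrated with a generic CS sketch. The paper reconstructs with an over-complete dictionary; this example instead uses a plain Gaussian sensing matrix and orthogonal matching pursuit, with illustrative sizes, purely to show the structure of the scheme.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # signal length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.5, -2.0, 1.0]    # sparse "signal" (illustrative)
y = Phi @ x_true                          # cheap compressed acquisition at the sensor
x_hat = omp(Phi, y, k)                    # reconstruction at the receiver
```

The energy saving in the paper comes from the sensor doing only the cheap `Phi @ x` projection, with all reconstruction cost moved off the node.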

  14. Asymmetric dual-loop feedback to suppress spurious tones and reduce timing jitter in self-mode-locked quantum-dash lasers emitting at 1.55 μm

    NASA Astrophysics Data System (ADS)

    Asghar, Haroon; McInerney, John G.

    2017-09-01

    We demonstrate an asymmetric dual-loop feedback scheme to suppress the external-cavity side-modes induced in self-mode-locked quantum-dash lasers by conventional single- and dual-loop feedback. In this letter, we achieved optimal suppression of spurious tones by optimizing the length of the second feedback delay. We observed that asymmetric dual-loop feedback with a large (~8x) disparity in cavity lengths eliminates all external-cavity side-modes and produces flat RF spectra close to the main peak, with low timing jitter compared to single-loop feedback. A significant reduction in RF linewidth and timing jitter was also observed as the second feedback delay was increased. The experimental results based on this feedback configuration validate predictions of recently published numerical simulations. This asymmetric dual-loop feedback scheme provides simple, efficient and cost-effective stabilization of side-band-free optoelectronic oscillators based on mode-locked lasers.

  15. Methodology of eco-oriented assessment of constructive schemes of cast in-situ RC framework in civil engineering

    NASA Astrophysics Data System (ADS)

    Avilova, I. P.; Krutilova, M. O.

    2018-01-01

    Economic growth is the main determinant of the trend toward increased greenhouse gas (GHG) emissions. Therefore, reducing emissions and stabilizing GHG levels in the atmosphere has become an urgent task for avoiding the worst predicted consequences of climate change. GHG emissions from the construction industry account for a significant share of industrial GHG emissions and are expected to increase steadily. The problem can be addressed with the help of both economic and organizational restrictions, based on enhanced algorithms for calculating and penalizing environmental harm in the building industry. This study aims to quantify the GHG emissions caused by different constructive schemes of RC framework in concrete casting. The results show that the proposed methodology enables a comparative analysis of alternative projects in residential housing that takes into account the environmental damage caused by the construction process. The study was carried out in the framework of the Program of flagship university development on the base of Belgorod State Technological University named after V.G. Shoukhov.

  16. Women’s preferences for alternative financial incentive schemes for breastfeeding: A discrete choice experiment

    PubMed Central

    Anokye, Nana; de Bekker-Grob, Esther W.; Higgins, Ailish; Relton, Clare; Strong, Mark; Fox-Rushby, Julia

    2018-01-01

    Background Increasing breastfeeding rates have been associated with reductions in disease in babies and mothers as well as in related costs. 'Nourishing Start for Health (NoSH)', a financial incentive scheme, has been proposed as a potentially effective way to increase both the number of mothers breastfeeding and the duration of breastfeeding. Aims To establish women's relative preferences for different aspects of a financial incentive scheme for breastfeeding and to identify the importance of scheme characteristics for the probability of participation in an incentive scheme. Methods A discrete choice experiment (DCE) obtained information on alternative specifications of the NoSH scheme designed to promote continued breastfeeding until at least 6 weeks after birth. Four attributes framed alternative scheme designs: value of the incentive; minimum breastfeeding duration required to receive the incentive; method of verifying breastfeeding; and type of incentive. Three versions of the DCE questionnaire, each containing 8 different choice sets, provided 24 choice sets for analysis. The questionnaire was mailed to 2,531 women in the South Yorkshire Cohort (SYC) aged 16–45 years in IMD quintiles 3–5. The analytic approach considered conditional and mixed-effects logistic models to account for preference heterogeneity that may be associated with variation in effects mediated by respondents' characteristics. Results 564 women completed the questionnaire, a response rate of 22%. Most of the included attributes were found to affect utility and therefore the probability of participating in the incentive scheme. Higher rewards were preferred, although the type of incentive also significantly affected women's preferences on average. 
We found evidence for preference heterogeneity based on individual characteristics that mediated preferences for an incentive scheme. Conclusions Although participants' opinion in our sample was mixed, financial incentives for breastfeeding may be an acceptable and effective instrument to change behaviour. However, individual characteristics could mediate the effect and should therefore be considered when developing and targeting future interventions. PMID:29649245

  17. On-board closed-loop congestion control for satellite based packet switching networks

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Ivancic, William D.; Kim, Heechul

    1993-01-01

    NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate, and it is therefore necessary to integrate a congestion control mechanism into the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations: the satellite continuously broadcasts the status of its output buffer, and the ground stations respond by selectively discarding packets or by tagging the excess packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes closed-loop congestion control less responsive; the broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and reduction function, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainty and allows a larger margin of error in status information. It can protect the high-priority packets from excessive loss while fully utilizing the downlink bandwidth.
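A ground station's reaction to the broadcast buffer status might look like the following sketch. The threshold and the fraction of packets kept are illustrative, not values from the study; the point is only the discard-vs-tag distinction.

```python
def ground_station_policy(buffer_occupancy, packets, threshold=0.8, mode="tag"):
    """React to the satellite's broadcast output-buffer status (sketch).

    When occupancy exceeds the threshold, either discard the excess
    packets on the ground, or tag them low-priority so the on-board
    switch drops them first if its output buffer overflows.
    Threshold and keep-fraction are hypothetical.
    """
    if buffer_occupancy <= threshold:
        return [(p, "high") for p in packets]
    keep = int(len(packets) * threshold)
    if mode == "discard":
        # ground discarding: excess packets never reach the uplink
        return [(p, "high") for p in packets[:keep]]
    # tagging: admit everything, but mark the excess as low priority
    return ([(p, "high") for p in packets[:keep]] +
            [(p, "low") for p in packets[keep:]])
```

The tagging branch illustrates why the paper finds tagging more forgiving: a mis-estimated occupancy only mislabels priorities rather than dropping packets outright.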

  18. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
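The genetic-algorithm component can be sketched generically. The paper's fuzzy-logic objective is replaced here by a hypothetical stand-in fitness (distance of the petrol-vs-electric split from an assumed optimum), and all parameters are illustrative.

```python
import random

def adaptive_ga(fitness, pop_size=20, genes=8, generations=40, seed=1):
    """Minimal genetic algorithm (sketch): evolves a bit string encoding
    a petrol-vs-electric split ratio. Elitist selection, single-point
    crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)       # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # mutation
                i = rng.randrange(genes)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def decode(bits):
    """Bit string -> split ratio in [0, 1]."""
    return int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1)

# stand-in objective: assume best efficiency at a 0.7 electric share
best = adaptive_ga(lambda b: -abs(decode(b) - 0.7))
```

In the actual scheme the fitness call would be the fuzzy-logic evaluation of driving conditions rather than this fixed target.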

  19. Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.

    The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
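The importance-sampling idea, in a minimal one-dimensional form: sample particle velocities from the equilibrium (zero-drift) Maxwellian and reweight them by the likelihood ratio to represent a slowly drifting flow. The drift value and sample count are illustrative, and this is a generic IS estimator, not the paper's scheme.

```python
import math
import random

def is_mean_velocity(u, n=200_000, seed=2):
    """Importance-sampling sketch: estimate the mean velocity of a
    drifted Maxwellian (unit thermal speed, small drift u) using
    particles drawn from the zero-drift equilibrium distribution,
    reweighted by w = f_drift(v) / f_eq(v)."""
    rng = random.Random(seed)
    total_w = total_wv = 0.0
    for _ in range(n):
        v = rng.gauss(0.0, 1.0)              # equilibrium sample
        w = math.exp(v * u - 0.5 * u * u)    # N(u,1)/N(0,1) likelihood ratio
        total_w += w
        total_wv += w * v
    return total_wv / total_w                # self-normalized estimate
```

Because the particles themselves never change, the same equilibrium ensemble can serve many flow conditions; only the weights depend on the drift.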

  20. Geometric reduction of dynamical nonlocality in nanoscale quantum circuits.

    PubMed

    Strambini, E; Makarenko, K S; Abulizi, G; de Jong, M P; van der Wiel, W G

    2016-01-06

    Nonlocality is a key feature discriminating quantum and classical physics. Quantum-interference phenomena, such as Young's double slit experiment, are one of the clearest manifestations of nonlocality, recently addressed as dynamical to specify its origin in the quantum equations of motion. It is well known that loss of dynamical nonlocality can occur due to (partial) collapse of the wavefunction due to a measurement, such as which-path detection. However, alternative mechanisms affecting dynamical nonlocality have hardly been considered, although of crucial importance in many schemes for quantum information processing. Here, we present a fundamentally different pathway of losing dynamical nonlocality, demonstrating that the detailed geometry of the detection scheme is crucial to preserve nonlocality. By means of a solid-state quantum-interference experiment we quantify this effect in a diffusive system. We show that interference is not only affected by decoherence, but also by a loss of dynamical nonlocality based on a local reduction of the number of quantum conduction channels of the interferometer. With our measurements and theoretical model we demonstrate that this mechanism is an intrinsic property of quantum dynamics. Understanding the geometrical constraints protecting nonlocality is crucial when designing quantum networks for quantum information processing.

  1. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  2. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).

    PubMed

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-05-12

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with a surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme has a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime.

  3. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs)

    PubMed Central

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-01-01

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with a surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme has a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime. PMID:29757208

  4. Government health insurance for people below poverty line in India: quasi-experimental evaluation of insurance and health outcomes

    PubMed Central

    Bendavid, Eran; Mukherji, Arnab; Wagner, Zachary; Nagpal, Somil; Mullen, Patrick

    2014-01-01

    Objectives To evaluate the effects of a government insurance program covering tertiary care for people below the poverty line in Karnataka, India, on out-of-pocket expenditures, hospital use, and mortality. Design Geographic regression discontinuity study. Setting 572 villages in Karnataka, India. Participants 31 476 households (22 796 below poverty line and 8680 above poverty line) in 300 villages where the scheme was implemented and 28 633 households (21 767 below poverty line and 6866 above poverty line) in 272 neighboring matched villages ineligible for the scheme. Intervention A government insurance program (Vajpayee Arogyashree scheme) that provided free tertiary care to households below the poverty line in about half of villages in Karnataka from February 2010 to August 2012. Main outcome measure Out-of-pocket expenditures, hospital use, and mortality. Results Among households below the poverty line, the mortality rate from conditions potentially responsive to services covered by the scheme (mostly cardiac conditions and cancer) was 0.32% in households eligible for the scheme compared with 0.90% among ineligible households just south of the eligibility border (difference of 0.58 percentage points, 95% confidence interval 0.40 to 0.75; P<0.001). We found no difference in mortality rates for households above the poverty line (households above the poverty line were not eligible for the scheme), with a mortality rate from conditions covered by the scheme of 0.56% in eligible villages compared with 0.55% in ineligible villages (difference of 0.01 percentage points, −0.03 to 0.03; P=0.95). Eligible households had significantly reduced out-of-pocket health expenditures for admissions to hospitals with tertiary care facilities likely to be covered by the scheme (64% reduction, 35% to 97%; P<0.001). There was no significant increase in use of covered services, although the point estimate of a 44.2% increase approached significance (−5.1% to 90.5%; P=0.059). 
Both reductions in out-of-pocket expenditures and potential increases in use might have contributed to the observed reductions in mortality. Conclusions Insuring poor households for efficacious but costly and underused health services significantly improves population health in India. PMID:25214509

  5. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
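The test-pattern-plus-candidate loop can be sketched in Chase style on a toy code. The `[5,2]` code, the nearest-codeword stand-in for the algebraic decoder, and the soft metric below are all illustrative simplifications; the scheme above additionally applies the optimality test and a trellis search, which are omitted here.

```python
from itertools import product

# A tiny [5,2] linear block code (illustrative); rows of G are the basis.
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]
CODEBOOK = [[(a * G[0][i] + b * G[1][i]) % 2 for i in range(5)]
            for a, b in product((0, 1), repeat=2)]

def chase_decode(llr, t=2):
    """Chase-style soft-decision decoding (sketch): build test error
    patterns on the t least-reliable positions, decode each trial word
    (here: nearest codeword, standing in for an algebraic decoder),
    and keep the candidate with the best soft metric."""
    hard = [1 if x < 0 else 0 for x in llr]           # hard decisions
    weak = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:t]
    best, best_metric = None, float("inf")
    for flips in product((0, 1), repeat=t):           # test error patterns
        trial = hard[:]
        for f, i in zip(flips, weak):
            trial[i] ^= f
        cand = min(CODEBOOK,
                   key=lambda c: sum(a != b for a, b in zip(c, trial)))
        # soft metric: total reliability of positions where the candidate
        # disagrees with the hard decisions (smaller = more likely)
        metric = sum(abs(llr[i]) for i in range(len(llr)) if cand[i] != hard[i])
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```

The candidate generated from the all-zero test pattern often already satisfies an optimality condition; the extra patterns matter only when the least-reliable symbols are wrong.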

  6. Global potential energy surface of ground state singlet spin O4

    NASA Astrophysics Data System (ADS)

    Mankodi, Tapan K.; Bhandarkar, Upendra V.; Puranik, Bhalchandra P.

    2018-02-01

    A new global potential energy surface for the singlet spin state O4 system is reported using CASPT2/aug-cc-pVTZ ab initio calculations. The geometries for the six-dimensional surface are constructed using a novel point generation scheme that employs randomly generated configurations based on the beta distribution. The advantage of this scheme is apparent in the reduction of the number of required geometries for a reasonably accurate potential energy surface (PES) and the consequent decrease in the overall computational effort. The reported surface matches well with the recently published singlet surface by Paukku et al. [J. Chem. Phys. 147, 034301 (2017)]. In addition to the O4 PES, the ground state N4 PES is also constructed using the point generation scheme and compared with the existing PES [Y. Paukku et al., J. Chem. Phys. 139, 044309 (2013)]. The singlet surface is constructed with the aim of studying high energy O2-O2 collisions and predicting collision induced dissociation cross section to be used in simulating non-equilibrium aerothermodynamic flows.
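The beta-distribution point generation can be sketched as follows. The shape parameters and distance bounds are illustrative, not the paper's values; the point is that a beta distribution concentrates sampled geometries where they are most informative instead of spreading them uniformly.

```python
import random

def sample_geometries(n, alpha=2.0, beta=2.0, bounds=(1.5, 8.0), seed=3):
    """Beta-distribution point generation (sketch): draw interatomic
    distances from a beta distribution rescaled to physical bounds
    (all numbers illustrative), so samples cluster away from the
    extremes of the coordinate range."""
    rng = random.Random(seed)
    lo, hi = bounds
    return [lo + (hi - lo) * rng.betavariate(alpha, beta) for _ in range(n)]
```

With `alpha = beta = 2` the draws peak at the midpoint of the range; skewed shapes would instead pile points near the repulsive wall or the asymptote, whichever region the fit needs most.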

  7. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing reads, based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/∼ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
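One plausible data-dependent bucketing key is a minimizer, the lexicographically smallest k-mer of a read; Mince's actual bucketing and encoding are more elaborate, so this is only a sketch of the idea that similar reads should become neighbours in the file.

```python
def minimizer(read, k=4):
    """Lexicographically smallest k-mer of a read: a cheap,
    locality-sensitive grouping key (sketch)."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def bucket_reads(reads, k=4):
    """Data-dependent bucketing (sketch): order reads by a shared
    substring key so similar sequences sit next to each other,
    which a downstream general-purpose compressor can exploit."""
    return sorted(reads, key=lambda r: (minimizer(r, k), r))
```

Reads sharing long substrings tend to share the same minimizer, so after bucketing a compressor sees long runs of near-identical text instead of an interleaving determined by sequencing order.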

  8. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion with artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase a general evaluation of the failure is performed, with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system. 
Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm addresses directly the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.

  9. Optimized multilayered wideband absorbers with graded fractal FSS

    NASA Astrophysics Data System (ADS)

    Vinoy, K. J.; Jose, K. A.; Varadan, Vijay K.; Varadan, Vasundara V.

    2001-08-01

    Various approaches have been followed for the reduction of radar cross section (RCS), especially of aircraft and missiles. In this paper we present the use of multiple layers of FSS-like fractal geometries printed on dielectric substrates for the same goal. The experimental results shown here indicate a 15 dB reduction in the reflection of a flat surface by the use of this configuration with low-loss dielectrics. An extensive optimization scheme is required for extending the angle coverage as well as the bandwidth of the absorber. A brief investigation of such a scheme involving a genetic algorithm for this purpose is also presented here.

  10. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using the CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors' ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using the CNN technique eliminates approximately half the FPs existing in the previous study. 
These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

  11. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    The two popular techniques of packet-combining-based error correction are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC has better throughput than APC, but suffers from a higher packet error rate. The wireless channel state changes continuously. Because of this random, time-varying nature of the wireless channel, applying the SR ARQ, PC, or APC scheme individually cannot deliver the desired throughput. Better throughput can be achieved if the appropriate transmission scheme is chosen based on the condition of the channel. Based on this approach, an adaptive packet combining scheme has been proposed to achieve better throughput. The proposed scheme adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes individually.
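The channel-adaptive switch among the three schemes can be sketched as follows. The BER thresholds are illustrative (the paper's three-state channel model would drive the choice), and `apc_majority` shows only the core APC combining idea of a bit-wise majority vote over multiple received copies.

```python
def choose_scheme(bit_error_rate):
    """Adaptive scheme selection (sketch); thresholds are hypothetical.
    - good channel     -> plain SR ARQ (retransmit on NAK, no combining)
    - moderate channel -> PC  (combine two erroneous copies)
    - bad channel      -> APC (combine three copies, majority logic)
    """
    if bit_error_rate < 1e-4:
        return "SR-ARQ"
    if bit_error_rate < 1e-2:
        return "PC"
    return "APC"

def apc_majority(copies):
    """APC core idea: bit-wise majority vote across three received copies."""
    return [1 if sum(bits) >= 2 else 0 for bits in zip(*copies)]
```

Majority voting corrects any bit position where at most one of the three copies is in error, which is why APC tolerates worse channels than PC at the cost of extra transmissions.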

  12. An Empirical Cumulus Parameterization Scheme for a Global Spectral Model

    NASA Technical Reports Server (NTRS)

    Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.

    2004-01-01

    Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme based on a procedure to improve the vertical distribution of heating and moistening over the tropics is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as a linear combination of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method which yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match well with the observed profiles. The impact of the ECPS is investigated in a series of short range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short range ECPS forecasts and those with the modified Kuo scheme show a very marked improvement in the skill of ECPS forecasts. This improvement in the forecast skill with ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. 
This also suggests that in the absence of explicit models for convection, the proposed statistical scheme improves the modeling of the vertical distribution of heating and moistening in areas of deep convection.
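    The two statistical steps described above (truncated PCA of the heating profiles, then regression of the PC scores on large-scale predictors) can be sketched as follows. All data, dimensions, and predictor choices are synthetic stand-ins, and plain (unrotated) PCA replaces the paper's rotated PCA for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 500 vertical heating profiles (Q1) on 14 levels,
# built from three underlying modes plus noise (shapes are illustrative only).
levels = 14
z = np.linspace(0, 1, levels)
true_modes = np.stack([np.sin(np.pi * z), np.sin(2 * np.pi * z), np.sin(3 * np.pi * z)])
scores_true = rng.normal(size=(500, 3))
Q1 = scores_true @ true_modes + 0.05 * rng.normal(size=(500, levels))

# Step 1: principal component analysis via SVD of the anomaly matrix.
anom = Q1 - Q1.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
explained = (S**2) / (S**2).sum()

# Retain the first three modes, as the ECPS analysis suggests suffices.
pcs = Vt[:3]                  # loading profiles
pc_scores = anom @ pcs.T      # PC score of each profile

# Step 2: regress PC scores on large-scale predictors (here the true scores
# plus noise, standing in for variables such as moisture convergence).
predictors = scores_true + 0.1 * rng.normal(size=scores_true.shape)
X = np.column_stack([np.ones(len(predictors)), predictors])
beta, *_ = np.linalg.lstsq(X, pc_scores, rcond=None)

# Reconstruct Q1 from the regressed scores, as the ECPS does at run time.
Q1_hat = (X @ beta) @ pcs + Q1.mean(axis=0)
rmse = np.sqrt(np.mean((Q1 - Q1_hat) ** 2))
```

    With three retained modes the explained variance is near unity and the reconstruction error stays close to the noise floor, which is the property the ECPS exploits.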

  13. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its consideration of link failures only. The method also includes a novel quantification scheme that likewise reduces the computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
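    What a minimal link-failure cut set is can be illustrated with a brute-force sketch on a toy network; the patented search algorithm works directly on the connectivity diagram and scales far better, so this is illustration only:

```python
from itertools import combinations

def connected(nodes, edges, s, t):
    """Reachability from s to t over the surviving undirected links."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        n = stack.pop()
        if n == t:
            return True
        for m in adj[n] - seen:
            seen.add(m); stack.append(m)
    return False

def minimal_cut_sets(nodes, edges, s, t):
    """Enumerate minimal link-failure cut sets by increasing size.

    A candidate is a cut set if removing its links disconnects s from t;
    enumerating by size and skipping supersets of cuts already found
    guarantees every recorded cut set is minimal.
    """
    cuts = []
    for k in range(1, len(edges) + 1):
        for combo in combinations(edges, k):
            if any(set(c) <= set(combo) for c in cuts):
                continue  # contains a smaller cut set: not minimal
            surviving = [e for e in edges if e not in combo]
            if not connected(nodes, surviving, s, t):
                cuts.append(combo)
    return cuts

# Toy bridge network between terminals s and t.
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
cuts = minimal_cut_sets(nodes, edges, "s", "t")
```

    For this bridge network the four minimal cut sets are the two links at s, the two links at t, and the two three-link cuts that include the bridge link a-b.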

  14. A computerized scheme for lung nodule detection in multiprojection chest radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo Wei; Li Qiang; Boyce, Sarah J.

    2012-04-15

    Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively.
The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of the CAD scheme for lung nodule detection.
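    Step (6), the integration of correlation information across the registered views, can be sketched as follows; the coordinates, tolerance, and matching rule are illustrative stand-ins for the registration actually used:

```python
# Minimal sketch of the integration step: after registration, keep a nodule
# candidate only if a matching candidate appears in at least two of the three
# projection images. Candidates are (x, y) points; two candidates "match" when
# they fall within a registration tolerance (all numbers are illustrative).

def fuse_candidates(views, tol=5.0):
    """views: list of three candidate lists, one per projection image."""
    kept = []
    for i, cands in enumerate(views):
        for c in cands:
            hits = 1  # detected in its own view
            for j, other in enumerate(views):
                if j == i:
                    continue
                if any(abs(c[0] - o[0]) <= tol and abs(c[1] - o[1]) <= tol
                       for o in other):
                    hits += 1
            if hits >= 2:            # detected at least twice: retain
                kept.append((i, c))
    return kept

views = [
    [(10, 10), (80, 40)],   # view 0: true nodule + a false positive
    [(11, 9)],              # view 1: the same nodule, slightly shifted
    [(12, 11), (200, 5)],   # view 2: the same nodule + an isolated false positive
]
kept = fuse_candidates(views)
```

    The nodule near (10, 10) survives in all three views, while the two isolated detections are discarded as false positives, which is the mechanism behind the large false-positive reduction reported above.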

  15. Bribe and Punishment: An Evolutionary Game-Theoretic Analysis of Bribery.

    PubMed

    Verma, Prateek; Sengupta, Supratim

    2015-01-01

    Harassment bribes, paid by citizens to corrupt officers for services the former are legally entitled to, constitute one of the most widespread forms of corruption in many countries. Nation states have adopted different policies to address this form of corruption. While some countries make both the bribe giver and the bribe taker equally liable for the crime, others impose a larger penalty on corrupt officers. We examine the consequences of asymmetric and symmetric penalties by developing deterministic and stochastic evolutionary game-theoretic models of bribery. We find that the asymmetric penalty scheme can lead to a reduction in incidents of bribery. However, the extent of reduction depends on how the players update their strategies over time. If the interacting members change their strategies with a probability proportional to the payoff of the alternative strategy option, the reduction in incidents of bribery is less pronounced. Our results indicate that changing from a symmetric to an asymmetric penalty scheme may not suffice in achieving significant reductions in incidents of harassment bribery.
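    A minimal sketch of the deterministic (replicator-dynamics) flavor of such a model is shown below. The payoff structure and parameter values are invented for illustration and are not the paper's exact model; it only demonstrates how a larger penalty on the officer can drive corruption out:

```python
# Two-population replicator dynamics (illustrative payoffs, not the paper's):
# x = fraction of corrupt officers, y = fraction of citizens willing to bribe.
# v is the bribe value, p the detection probability, f_o / f_c the penalties
# imposed on officer and citizen respectively.
def simulate(f_o, f_c, v=1.0, p=0.5, steps=4000, dt=0.01):
    x, y = 0.5, 0.5
    for _ in range(steps):
        # A corrupt officer gains the bribe from willing citizens but risks
        # the penalty; an honest officer gets the baseline payoff of zero.
        pay_corrupt = y * (v - p * f_o)
        # A bribing citizen obtains the service from corrupt officers but
        # risks the citizen-side penalty.
        pay_bribe = x * (v - p * f_c)
        # Replicator update: strategies grow when they beat the alternative.
        x += dt * x * (1 - x) * pay_corrupt
        y += dt * y * (1 - y) * pay_bribe
    return x, y

x_sym, _ = simulate(f_o=1.0, f_c=1.0)    # symmetric penalties, too weak here
x_asym, _ = simulate(f_o=4.0, f_c=0.0)   # asymmetric: officer bears the penalty
```

    Under these illustrative payoffs the symmetric penalty leaves bribery profitable for both sides and corruption spreads, while the asymmetric penalty makes corruption a losing strategy for officers and drives it toward extinction.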

  16. Bribe and Punishment: An Evolutionary Game-Theoretic Analysis of Bribery

    PubMed Central

    Verma, Prateek; Sengupta, Supratim

    2015-01-01

    Harassment bribes, paid by citizens to corrupt officers for services the former are legally entitled to, constitute one of the most widespread forms of corruption in many countries. Nation states have adopted different policies to address this form of corruption. While some countries make both the bribe giver and the bribe taker equally liable for the crime, others impose a larger penalty on corrupt officers. We examine the consequences of asymmetric and symmetric penalties by developing deterministic and stochastic evolutionary game-theoretic models of bribery. We find that the asymmetric penalty scheme can lead to a reduction in incidents of bribery. However, the extent of reduction depends on how the players update their strategies over time. If the interacting members change their strategies with a probability proportional to the payoff of the alternative strategy option, the reduction in incidents of bribery is less pronounced. Our results indicate that changing from a symmetric to an asymmetric penalty scheme may not suffice in achieving significant reductions in incidents of harassment bribery. PMID:26204110

  17. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
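    The underlying spectral representation can be illustrated for a univariate process; the paper's contribution concerns the multivariate case and its dimension-reduced variants with only a few elementary random variables, and the spectrum below is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard spectral representation of a zero-mean stationary process:
#   X(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k),
# with independent phases phi_k ~ U(0, 2*pi) and one-sided spectrum S(w).
N = 1024
w_max = 4 * np.pi
dw = w_max / N
w = (np.arange(N) + 0.5) * dw
S = 1.0 / (1.0 + w**2)            # illustrative one-sided power spectrum

phi = rng.uniform(0, 2 * np.pi, N)
t = np.linspace(0, 50, 2000)
X = np.sum(np.sqrt(2 * S * dw)[:, None] * np.cos(np.outer(w, t) + phi[:, None]),
           axis=0)

# The process variance should match the integral of the one-sided spectrum.
target_var = np.sum(S * dw)       # ~ arctan(w_max) ~ 1.49
sample_var = X.var()
```

    The dimension-reduction methods in the paper replace the many independent phases with a few elementary random variables through constraining random functions; the direct sum above (which the paper accelerates via the FFT) shows the representation they start from.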

  18. Sources, paths, and concepts for reduction of noise in the test section of the NASA Langley 4x7m wind tunnel

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.; Wilby, J. F.

    1984-01-01

    NASA is investigating the feasibility of modifying the 4x7m Wind Tunnel at the Langley Research Center to make it suitable for a variety of aeroacoustic testing applications, most notably model helicopter rotors. The amount of noise reduction required to meet NASA's goal for test section background noise was determined, the predominant sources and paths causing the background noise were quantified, and trade-off studies between schemes to reduce fan noise at the source and those to attenuate the sound generated in the circuit between the sources and the test section were carried out. An extensive data base is also presented on circuit sources and paths.

  19. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    NASA Technical Reports Server (NTRS)

    Pan, Jianqiang

    1992-01-01

    Several important problems in the fields of signal processing and model identification are considered: system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using the Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Some identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Also, several new schemes for model reduction were developed. Based upon the complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual approaches. Also studied was the problem of deconvolution and parameter identification of a general noncausal nonminimum-phase ARMA system driven by non-Gaussian stationary random processes. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.

  20. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two architectures for the two-timescale discretization scheme are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement, with differences of less than 0.5 percent observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  1. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant-gain control scheme better suited to parallel mechanisms. The force control scheme can be easily modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

  2. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms further reducing the FB's bandwidth and inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). Through this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described. Then its performance is compared to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  3. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
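    Stage (ii), the ReliefF-style feature weighting, can be sketched in a simplified binary-class form; the discretization, rough set reduction, and C4.5 ensemble stages are omitted, and the data and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified Relief-style feature weighting (binary-class, single nearest
# neighbor): a feature is rewarded when it separates a sample from its
# nearest miss (opposite class) and penalized when it separates the sample
# from its nearest hit (same class).
def relief(X, y, n_iter=200):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # L1 distance to every sample
        dist[i] = np.inf                      # never pick the sample itself
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        hit = same[np.argmin(dist[same])]
        miss = diff[np.argmin(dist[diff])]
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Synthetic data: feature 0 predicts the class, feature 1 is pure noise.
n = 300
y = rng.integers(0, 2, n)
X = np.column_stack([y + 0.1 * rng.normal(size=n), rng.normal(size=n)])
w = relief(X, y)
```

    The informative feature receives a clearly larger weight than the noise feature, which is the ranking the RFRS pipeline uses before the rough-set reduction step.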

  4. Manifold Embedding and Semantic Segmentation for Intraoperative Guidance With Hyperspectral Brain Imaging.

    PubMed

    Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong

    2017-09-01

    Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the t-distributed stochastic neighbor embedding approach is first performed, and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.

  5. Multichannel feedforward control schemes with coupling compensation for active sound profiling

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    Active sound profiling includes a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units, while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system at which they are implemented. The tuning performance of the proposed algorithms is benchmarked against the centralized and pure-decentralized control schemes through computer simulations on a simplified numerical model, which has also been subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes have been shown to be the only ones that properly deliver targeted active sound profiling tasks at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes are able to attain reductions of more than 60 dB upon periodic disturbances at a number of positions, while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is found to be defective at a given frequency, the inclusion of a regularization parameter in the cost function is seen not to hinder the proper operation of the proposed compensation schemes, while assuring their stability, at the expense of some loss of control performance.

  6. A pilot quality assurance scheme for diabetic retinopathy risk reduction programmes.

    PubMed

    Garvican, L; Scanlon, P H

    2004-10-01

    We describe a pilot study of measurement of quality assurance targets for diabetic retinopathy screening and performance comparison between 10 existing services, in preparation for the roll-out of the national programme. In 1999 the UK National Screening Committee approved proposals for a national diabetic retinopathy risk reduction programme, including recommendations for quality assurance, but implementation was held pending publication of the National Service Framework for Diabetes. Existing services requested the authors to perform a pilot study of a QA scheme, indicating willingness to contribute data for comparison. Objectives and quality standards were developed, following consultation with diabetologists, ophthalmologists and retinal screeners. Services submitted 2001/2 performance data, in response to a questionnaire, for anonymization, central analysis and comparison. The 17 quality standards encompass all aspects of the programme from identification of patients to timeliness of treatment. Ten programmes took part, submitting all the data available. All returns were incomplete, but especially so from the optometry-based schemes. Eight or more services demonstrated they could reach the minimum level in only five of the 17 standards. Thirty per cent could not provide coverage data. All were running behind. Reasons for difficulties in obtaining data and/or failing to achieve standards included severe under-funding and little previous experience of QA. Information systems were limited and incompatible between diabetes and eye units, and there was a lack of co-ordinated management of the whole programme. Quality assurance is time-consuming, expensive and inadequately resourced. The pilot study identified priorities for local action. National programme implementation must involve integral quality assurance mechanisms from the outset.

  7. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discretization schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
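    The role of a preconditioner in such an iterative solver can be illustrated with a deliberately simplified stand-in: preconditioned conjugate gradient with a Jacobi preconditioner on a symmetric positive-definite model problem, rather than the paper's multigrid-preconditioned BiCGSTAB on the indefinite frequency-domain operator:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxiter=1000):
    """Preconditioned conjugate gradient; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# 1D variable-coefficient model problem (SPD stand-in for the frequency-domain
# system; the actual Helmholtz-type operator is indefinite, hence BiCGSTAB).
n = 200
c = 1.0 + 9.0 * np.arange(n) / n          # strongly varying diagonal scale
A = np.diag(2 * c) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x_plain, it_plain = pcg(A, b, lambda r: r)        # no preconditioning
x_prec, it_prec = pcg(A, b, lambda r: r / (2 * c))  # Jacobi preconditioner
```

    Even this trivial diagonal preconditioner reduces the iteration count on the badly scaled system; the multigrid preconditioner in the paper plays the same role for the much harder frequency-domain operator.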

  8. Determining the Impact of Personal Mobility Carbon Allowance Schemes in Transportation Networks

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.; Zhan, Xianyuan

    2016-10-17

    Personal mobility carbon allowance (PMCA) schemes are designed to reduce carbon consumption from transportation networks. PMCA schemes influence the travel decision process of users and accordingly impact system metrics including travel time and greenhouse gas (GHG) emissions. Here, we develop a multi-user class dynamic user equilibrium model to evaluate transportation system performance when a PMCA scheme is implemented. The results using the Sioux-Falls test network indicate that PMCA schemes can achieve the emissions reduction goals for transportation networks. Further, users characterized by a high value of travel time are found to be less sensitive to the carbon budget in the context of work trips. Results also show that a PMCA scheme can lead to higher emissions for a path compared with the case without PMCA because of flow redistribution. The developed network equilibrium model allows us to examine the change in system states at different carbon allocation levels and to design parameters of PMCA schemes accounting for population heterogeneity.

  9. Determining the Impact of Personal Mobility Carbon Allowance Schemes in Transportation Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.; Zhan, Xianyuan

    Personal mobility carbon allowance (PMCA) schemes are designed to reduce carbon consumption from transportation networks. PMCA schemes influence the travel decision process of users and accordingly impact system metrics including travel time and greenhouse gas (GHG) emissions. Here, we develop a multi-user class dynamic user equilibrium model to evaluate transportation system performance when a PMCA scheme is implemented. The results using the Sioux-Falls test network indicate that PMCA schemes can achieve the emissions reduction goals for transportation networks. Further, users characterized by a high value of travel time are found to be less sensitive to the carbon budget in the context of work trips. Results also show that a PMCA scheme can lead to higher emissions for a path compared with the case without PMCA because of flow redistribution. The developed network equilibrium model allows us to examine the change in system states at different carbon allocation levels and to design parameters of PMCA schemes accounting for population heterogeneity.

  10. The Impact of Vocational Education on Poverty Reduction, Quality Assurance and Mobility on Regional Labour Markets--Selected EU-Funded Schemes

    ERIC Educational Resources Information Center

    Wallenborn, Manfred

    2009-01-01

    Vocational education can serve to promote social stability and sustainable economic and social development. The European Union (EU) strategically employs a range of vocational educational schemes to attain these overriding goals. Topical points of focus are selected in line with requirements in the individual partner countries or regions. However,…

  11. Limited effect of anthropogenic nitrogen oxides on secondary organic aerosol formation

    DOE PAGES

    Zheng, Y.; Unger, N.; Hodzic, A.; ...

    2015-12-08

    Globally, secondary organic aerosol (SOA) is mostly formed from emissions of biogenic volatile organic compounds (VOCs) by vegetation, but it can be modified by human activities as demonstrated in recent research. Specifically, nitrogen oxides (NO x = NO + NO 2) have been shown to play a critical role in the chemical formation of low volatility compounds. We have updated the SOA scheme in the global NCAR (National Center for Atmospheric Research) Community Atmospheric Model version 4 with chemistry (CAM4-chem) by implementing a 4-product volatility basis set (VBS) scheme, including NO x-dependent SOA yields and aging parameterizations. Small differences are found for the no-aging VBS and 2-product schemes; large increases in SOA production and the SOA-to-OA ratio are found for the aging scheme. The predicted organic aerosol amounts capture both the magnitude and distribution of US surface annual mean measurements from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network by 50 %, and the simulated vertical profiles are within a factor of 2 compared to aerosol mass spectrometer (AMS) measurements from 13 aircraft-based field campaigns across different regions and seasons. We then perform sensitivity experiments to examine how the SOA loading responds to a 50 % reduction in anthropogenic nitric oxide (NO) emissions in different regions. We find limited SOA reductions of 0.9–5.6, 6.4–12.0 and 0.9–2.8 % for global, southeast US and Amazon NO x perturbations, respectively. The fact that SOA formation is almost unaffected by changes in NO x can be largely attributed to a limited shift in chemical regime, to buffering in chemical pathways (low- and high-NO x pathways, O 3 versus NO 3-initiated oxidation) and to offsetting tendencies in the biogenic versus anthropogenic SOA responses.
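    The volatility basis set rests on equilibrium partitioning of organics between the gas and aerosol phases. A minimal sketch of the standard partitioning calculation follows; the bin masses and saturation concentrations are invented for illustration and are not CAM4-chem's values:

```python
# Standard VBS equilibrium partitioning: each volatility bin i, with total
# (gas + aerosol) mass C_tot[i] and saturation concentration C_star[i], has
# aerosol-phase fraction 1 / (1 + C_star[i] / C_OA), where the total aerosol
# mass C_OA must satisfy the self-consistency condition
#   C_OA = sum_i C_tot[i] / (1 + C_star[i] / C_OA),
# solved here by fixed-point iteration from an upper bound.
def partition(C_tot, C_star, n_iter=200):
    c_oa = sum(C_tot)                 # start from "everything condensed"
    for _ in range(n_iter):
        c_oa = sum(c / (1.0 + cs / c_oa) for c, cs in zip(C_tot, C_star))
    return c_oa

C_star = [1.0, 10.0, 100.0, 1000.0]   # 4-product basis set (ug/m3)
C_tot = [3.0, 4.0, 5.0, 8.0]          # total organic mass per bin (ug/m3)
c_oa = partition(C_tot, C_star)
frac = [1.0 / (1.0 + cs / c_oa) for cs in C_star]
```

    Low-volatility bins partition almost entirely into the aerosol phase while high-volatility bins stay mostly in the gas phase; NO x-dependent yields and aging in the paper's scheme act by moving mass between such bins.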

  12. The Critical Role of the Routing Scheme in Simulating Peak River Discharge in Global Hydrological Models

    NASA Technical Reports Server (NTRS)

    Zhao, Fang; Veldkamp, Ted I. E.; Frieler, Katja; Schewe, Jacob; Ostberg, Sebastian; Willner, Sven; Schauberger, Bernhard; Gosling, Simon N.; Schmied, Hannes Muller; Portmann, Felix T.

    2017-01-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge, which is crucial in flood simulations, has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end, we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a (Inter-Sectoral Impact Model Intercomparison Project phase 2a) project. The runoff simulations were used as input for the global river routing model CaMa-Flood (Catchment-based Macro-scale Floodplain). The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC (Global Runoff Data Centre) stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about two-thirds of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.

  13. A Temporal Locality-Aware Page-Mapped Flash Translation Layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gupta, Aayush; Urgaonkar, Bhuvan

    2013-01-01

    The poor performance of random writes has been a cause of major concern which needs to be addressed to better utilize the potential of flash in enterprise-scale environments. We examine one of the important causes of this poor performance: the design of the flash translation layer (FTL) which performs the virtual-to-physical address translations and hides the erase-before-write characteristics of flash. We propose a complete paradigm shift in the design of the core FTL engine from the existing techniques with our Demand-Based Flash Translation Layer (DFTL) which selectively caches page-level address mappings. Our experimental evaluation using FlashSim with realistic enterprise-scale workloads endorses the utility of DFTL in enterprise-scale storage systems by demonstrating: 1) improved performance, 2) reduced garbage collection overhead and 3) better overload behavior compared with hybrid FTL schemes which are the most popular implementation methods. For example, a predominantly random-write dominant I/O trace from an OLTP application running at a large financial institution shows a 78% improvement in average response time (due to a 3-fold reduction in operations of the garbage collector), compared with the hybrid FTL scheme. Even for the well-known read-dominant TPC-H benchmark, for which DFTL introduces additional overheads, we improve system response time by 56%. Moreover, interestingly, when the write-back cache on a DFTL-based SSD is enabled, DFTL even outperforms the page-based FTL scheme, improving response time by 72% in the Financial trace.
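    The core DFTL idea, keeping the full page-level map on flash while caching only recently used translations on demand, can be sketched as follows; the structure and counters are illustrative, not the paper's implementation:

```python
from collections import OrderedDict

# Minimal sketch of a demand-based FTL: the full logical-to-physical page map
# notionally lives on flash, and only recently used translations are cached
# in SRAM with LRU eviction. The miss counter stands in for the cost of the
# extra flash reads needed to fetch a mapping on demand.
class DemandFTL:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()     # cached logical->physical translations
        self.global_map = {}           # full map, notionally stored on flash
        self.hits = self.misses = 0

    def translate(self, lpn):
        if lpn in self.cache:
            self.hits += 1
            self.cache.move_to_end(lpn)          # refresh LRU position
        else:
            self.misses += 1                     # extra flash read(s) here
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[lpn] = self.global_map.setdefault(lpn, len(self.global_map))
        return self.cache[lpn]

ftl = DemandFTL(capacity=4)
# A workload with temporal locality: a hot set of three pages plus cold pages.
for lpn in [1, 2, 3, 1, 2, 3, 9, 1, 2, 3, 8, 1, 2, 3]:
    ftl.translate(lpn)
```

    The hot pages stay cached and hit repeatedly while the cold pages are evicted, which is why selective caching captures most of the benefit of a full page-level map at a fraction of the SRAM cost.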

  14. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    PubMed

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these images for search and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, classes with similar shape content are grouped, based on merging measures and shape features, into general overlapped classes. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or of texture features only. This procedure continues in the last levels until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm with the Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level, and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (IMAGECLEF 2005 database), and an accuracy rate of 93.6% is obtained in the last level of the hierarchical structure for the 18-class classification problem.

  15. ENDOGENOUS REDUCTANTS SUPPORT THE CATALYTIC FUNCTION OF RECOMBINANT RAT CYT119, AN ARSENIC METHYLTRANSFERASE

    EPA Science Inventory

    The postulated scheme for the metabolism of inorganic As involves alternating steps of oxidative methylation and of reduction of As from the pentavalent to the trivalent oxidation state, producing methylated compounds containing AsIII that are highly reactive and toxic. S-adenosy...

  16. Chaotic reconfigurable ZCMT precoder for OFDM data encryption and PAPR reduction

    NASA Astrophysics Data System (ADS)

    Chen, Han; Yang, Xuelin; Hu, Weisheng

    2017-12-01

    A secure orthogonal frequency division multiplexing (OFDM) transmission scheme precoded by chaotic Zadoff-Chu matrix transform (ZCMT) is proposed and demonstrated. It is proved that the reconfigurable ZCMT matrices after row/column permutations can be applied as an alternative precoder for peak-to-average power ratio (PAPR) reduction. The permutations and the reconfigurable parameters in the ZCMT matrix are generated by a hyper digital chaos, in which a huge key space of ∼10^800 is created for physical-layer OFDM data encryption. An encrypted data transmission of 8.9 Gb/s optical OFDM signals is successfully demonstrated over 20 km standard single-mode fiber (SSMF) for 16-QAM. The BER performance of the encrypted signals is improved by ∼2 dB (BER @ 10^-3), which is mainly attributed to the effective reduction of PAPR via chaotic ZCMT precoding. Moreover, the chaotic ZCMT precoding scheme requires no sideband information, so the spectrum efficiency is enhanced during transmission.
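
    A key property behind ZCMT-based PAPR reduction is that Zadoff-Chu sequences have constant amplitude, so the precoder spreads each data symbol evenly across subcarriers. The sketch below only verifies that property; the paper's chaotic row/column permutations and parameter selection are not reproduced, and the root index and length are illustrative.

```python
import cmath

# Zadoff-Chu sequence sketch: for odd N and gcd(u, N) = 1, every element
# lies on the unit circle (constant amplitude), the property exploited by
# ZC-based precoders to tame PAPR.

def zadoff_chu(u, N):
    """Length-N Zadoff-Chu sequence with root index u (N odd, gcd(u, N) = 1)."""
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N) for n in range(N)]

z = zadoff_chu(u=5, N=63)
magnitudes = [abs(v) for v in z]
print(min(magnitudes), max(magnitudes))  # all magnitudes equal 1
```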

  17. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time necessary to segment an image. The main contribution of this work concerns reducing the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. It has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
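
    The data-reduction idea can be illustrated with a toy one-dimensional k-means vector quantization that replaces many training pixels by a few prototypes, so a classifier later trains on far fewer vectors. This is illustrative only; the paper's hybrid color spaces and SVM training are not reproduced, and the pixel values and initial centers are hypothetical.

```python
# Toy 1-D vector quantization via k-means: many pixel intensities are
# summarized by a few prototype values, shrinking the training set a
# classifier has to handle.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # move each center to the mean of its cluster (keep empty clusters put)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in clusters.items()]
    return centers

pixels = [10, 12, 11, 200, 205, 198, 100, 102]   # hypothetical intensities
protos = kmeans_1d(pixels, centers=[0.0, 128.0, 255.0])
print(len(protos), sorted(round(p) for p in protos))  # -> 3 [11, 101, 201]
```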

  18. Review of anhydrous zirconium-hafnium separation techniques. Information circular/1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skaggs, R.L.; Rogers, D.T.; Hunter, D.B.

    1983-12-01

    Sixteen nonaqueous techniques conceived to replace the current aqueous scheme for separating hafnium and zirconium tetrachlorides were reviewed and evaluated by the Bureau of Mines. The methods are divided into two classes: separation by fractional volatilization of the tetrachlorides, which takes advantage of the higher volatility of hafnium tetrachloride; and separation by chemical techniques, based on differences in chemical behavior of the two tetrachlorides. The criteria used to evaluate separation methods were temperature, pressure, separation factor per equilibrium stage, complexity, compatibility with existing technology, and potential for continuous operation. Three processes were selected as being most promising: (1) high-pressure distillation, (2) extractive distillation from a molten salt, and (3) preferential reduction of gaseous ZrCl4. Any of the proposed nonaqueous Hf-Zr separation schemes must be supplemented with additional purification to remove trace impurities.

  19. A Scheme for Targeting Optical SETI Observations

    NASA Astrophysics Data System (ADS)

    Shostak, Seth

    2004-06-01

    In optical SETI (OSETI) experiments, it is generally assumed that signals will be deliberate, narrowly targeted beacons sent by extraterrestrial societies to large numbers of candidate star systems. If this is so, then it may be unrealistic to expect a high duty cycle for the received signal. Ergo, an advantage accrues to any OSETI scheme that realistically suggests where and when to search. In this paper, we elaborate a proposal (Castellano, Doyle, & McIntosh 2000) for selecting regions of sky for intensive optical SETI monitoring based on characteristics of our solar system that would be visible at great distance. This can enormously lessen the amount of sky that needs to be searched. In addition, this is an attractive approach for the transmitting society because it both increases the chances of reception and provides a large reduction in the energy required. With good astrometric information, the transmitter need be no more powerful than an automobile tail light.

  20. Demonstration of Cascaded Modulator-Chicane Microbunching of a Relativistic Electron Beam

    DOE PAGES

    Sudar, N.; Musumeci, P.; Gadjev, I.; ...

    2018-03-15

    Here, we present results of an experiment showing the first successful demonstration of a cascaded microbunching scheme. Two modulator-chicane prebunchers arranged in series and a high power mid-IR laser seed are used to modulate a 52 MeV electron beam into a train of sharp microbunches phase locked to the external drive laser. This configuration is shown to greatly improve matching of the beam into the small longitudinal phase space acceptance of short-wavelength accelerators. We demonstrate trapping of nearly all (96%) of the electrons in a strongly tapered inverse free-electron laser accelerator, with an order-of-magnitude reduction in injection losses compared to the classical single-buncher scheme. These results represent a critical advance in laser-based longitudinal phase space manipulations and find application in high gradient advanced acceleration as well as in high peak and average power coherent radiation sources.

  1. Evaluation of viscous drag reduction schemes for subsonic transports

    NASA Technical Reports Server (NTRS)

    Marino, A.; Economos, C.; Howard, F. G.

    1975-01-01

    The results of a theoretical study of viscous drag reduction schemes for potential application to the fuselage of a long-haul subsonic transport aircraft are described. The schemes examined included tangential slot injection on the fuselage and various synergetic combinations of tangential slot injection and distributed suction applied to wing and fuselage surfaces. Both passive and mechanical (utilizing turbomachinery) systems were examined. Overall performance of the selected systems was determined at a fixed subsonic cruise condition corresponding to a free-stream Mach number of 0.8 and an altitude of 11,000 m. The nominal aircraft to which most of the performance data were referenced was a wide-body transport of the Boeing 747 category. Some of the performance results obtained with wing suction are referenced to a Lockheed C-141 StarLifter wing section. Alternate designs investigated involved combinations of boundary layer suction on the wing surfaces and injection on the fuselage, and suction and injection combinations applied to the fuselage only.

  2. A novel multi-scale adaptive sampling-based approach for energy saving in leak detection for WSN-based water pipelines

    NASA Astrophysics Data System (ADS)

    Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari

    2017-12-01

    In this paper, we propose a novel monitoring strategy for a wireless sensor network (WSN)-based water pipeline network. Our strategy reduces energy consumption through a multi-pronged approach based on two types of vibration sensors plus pressure sensors, all having different energy levels, and a hierarchical adaptive sampling mechanism to determine the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored, using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, a duty-cycling approach is used for all sensors to reduce the sampling instances, such that the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles, and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low duty-cycling (HE-HP) vibration sensor adjusts the sampling frequency of the high duty-cycling (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network which uses pressure and vibration sensors, with the latter having different energy consumptions and precision levels, at various locations in the network. This makes the strategy all the more useful for energy conservation during extended monitoring. It is shown that the novel features of our proposed scheme achieve a significant reduction in energy consumption and that the leak is effectively detected by the sensor node closest to it. Finally, both the total energy consumed by monitoring and the time needed by a WSN node to detect the leak are computed, and both show the superiority of our proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
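
    The threshold-driven rate adjustment can be sketched with a one-level Haar detail transform as a crude bandwidth proxy: when high-frequency energy is low the next cycle samples slower, saving energy. This is a minimal sketch under assumed thresholds and rate bounds, not the authors' wavelet scheme.

```python
# Minimal sketch of threshold-driven adaptive sampling: estimate
# high-frequency content with one level of a Haar detail transform and
# halve or double the sampling rate for the next cycle accordingly.

def haar_detail_energy(samples):
    """Energy of first-level Haar detail coefficients (pairwise differences)."""
    details = [(samples[i] - samples[i + 1]) / 2
               for i in range(0, len(samples) - 1, 2)]
    return sum(d * d for d in details)

def next_rate(rate, samples, threshold=0.01, lo=1.0, hi=64.0):
    energy = haar_detail_energy(samples)
    if energy < threshold:        # quiet signal: save energy, sample slower
        return max(lo, rate / 2)
    return min(hi, rate * 2)      # vibration detected: sample faster

flat = [1.0] * 8                  # no vibration
bursty = [0.0, 1.0] * 4           # strong high-frequency content
print(next_rate(16.0, flat), next_rate(16.0, bursty))  # -> 8.0 32.0
```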

  3. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
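
    The Tikhonov-minimization scheme named above can be written in closed form for a tiny diagonal system, x = (AᵀA + λI)⁻¹Aᵀb. The sketch below shows only this regularized solve on hypothetical numbers; the paper's LSQR-based search for the optimal λ on full-scale systems is not reproduced.

```python
# Tikhonov-regularized least squares on a diagonal system, solved in
# closed form: x_i = a_i * b_i / (a_i^2 + lambda). Larger lambda pulls the
# solution toward zero; lambda = 0 recovers the exact solution.

def tikhonov_diagonal(a_diag, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 for diagonal A."""
    return [ai * bi / (ai * ai + lam) for ai, bi in zip(a_diag, b)]

x_reg = tikhonov_diagonal([2.0, 1.0], [2.0, 1.0], lam=1.0)
x_exact = tikhonov_diagonal([2.0, 1.0], [2.0, 1.0], lam=0.0)
print(x_reg)    # -> [0.8, 0.5]
print(x_exact)  # -> [1.0, 1.0]
```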

  4. Speckle reduction in optical coherence tomography images based on wave atoms

    PubMed Central

    Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping

    2014-01-01

    Abstract. Optical coherence tomography (OCT) is an emerging noninvasive imaging technique, which is based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on wave atoms transform is proposed for speckle reduction in OCT images. Wave atoms transform is a new multiscale geometric analysis tool that offers sparser expansion and better representation for images containing oscillatory patterns and textures than other traditional transforms, such as wavelet and curvelet transforms. Cycle spinning-based technology is introduced to avoid visual artifacts, such as Gibbs-like phenomenon, and to develop a translation invariant wave atoms denoising scheme. The speckle suppression degree in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove the speckle noise and improve the OCT image quality. The signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are also compared with the wavelet and curvelet thresholding techniques. PMID:24825507
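
    The shrinkage step at the heart of such transform-domain denoisers can be sketched as soft thresholding of coefficients: values below the threshold are zeroed and larger ones are shrunk toward zero. The wave atoms transform itself is not implemented here, and the coefficient values are hypothetical.

```python
# Soft-thresholding (shrinkage) step of transform-domain denoising:
# small coefficients (assumed to be noise) are zeroed, larger ones are
# shrunk toward zero by the threshold t.

def soft_threshold(coeffs, t):
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

noisy = [0.05, -0.2, 1.3, -0.9, 0.01]     # hypothetical transform coefficients
denoised = soft_threshold(noisy, t=0.1)
print(denoised)  # small entries zeroed, large entries shrunk by 0.1
```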

  5. A novel framework for objective detection and tracking of TC center from noisy satellite imagery

    NASA Astrophysics Data System (ADS)

    Johnson, Bibin; Thomas, Sachin; Rani, J. Sheeba

    2018-07-01

    This paper proposes a novel framework for automatically determining and tracking the center of a tropical cyclone (TC) during its entire life cycle from the thermal infrared (TIR) channel data of a geostationary satellite. The proposed method handles meteorological images with noise or with missing or partial information due to seasonal variability and a lack of significant spatial or vortex features. To retrieve the cyclone center under these circumstances, a synergistic approach based on objective measures and a Numerical Weather Prediction (NWP) model is proposed. This method employs a spatial gradient scheme to process missing and noisy frames, or a spatio-temporal gradient scheme for image sequences that are continuous and contain less noise. The initial estimate of the TC center from the missing imagery is corrected by exploiting an NWP model-based post-processing scheme. The validity of the framework is tested on infrared images of different cyclones obtained from various geostationary satellites such as Meteosat-7, INSAT-3D, Kalpana-1, etc. The computed track is compared with the actual track data obtained from the Joint Typhoon Warning Center (JTWC), and it shows a reduction of mean track error by 11% compared with other state-of-the-art methods in the presence of missing and noisy frames. The proposed method is also successfully tested for simultaneous retrieval of the TC center from images containing multiple non-overlapping cyclones.

  6. Mechanisms of Phosphorus Acquisition and Lipid Class Remodeling under P Limitation in a Marine Microalga

    PubMed Central

    Winge, Per; El Assimi, Aimen; Jouhet, Juliette; Vadstein, Olav

    2017-01-01

    Molecular mechanisms of phosphorus (P) limitation are of great interest for understanding algal production in aquatic ecosystems. Previous studies point to P limitation-induced changes in lipid composition. Because the molecular mechanisms of this specific P stress adaptation in microalgae remain unresolved, we reveal a detailed phospholipid-recycling scheme in Nannochloropsis oceanica and describe important P acquisition genes based on highly corresponding transcriptome and lipidome data. Initial responses to P limitation showed increased expression of genes involved in P uptake and an expansion of the P substrate spectrum based on purple acid phosphatases. Increased P trafficking displayed a rearrangement between compartments, supplying P to the chloroplast and carbon to the cytosol for lipid synthesis. We propose a novel phospholipid-recycling scheme for algae that leads to the rapid reduction of phospholipids and the synthesis of P-free lipid classes. P mobilization through membrane lipid degradation is mediated mainly by two glycerophosphoryldiester phosphodiesterases and three patatin-like phospholipases A at the transcriptome level. To compensate for low phospholipids during exponential growth, N. oceanica synthesized sulfoquinovosyldiacylglycerol and diacylglyceroltrimethylhomoserine. This study shows that the N. oceanica strain has a unique repertoire of genes that facilitate P acquisition and the degradation of phospholipids compared with other stramenopiles. The novel phospholipid-recycling scheme opens new avenues for metabolic engineering of lipid composition in algae. PMID:29051196

  7. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
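
    The elitism-based immigrants idea described above can be sketched directly: mutate the current elite to create immigrants that inject diversity while staying near the best-known solution. The genome length, mutation probability and seed below are illustrative, not the paper's settings.

```python
import random

# Sketch of elitism-based immigrants for a binary-coded GA: each immigrant
# is a mutated copy of the elite individual, so new diversity stays close
# to the best solution found so far.

def make_immigrants(elite, n, p_mut, rng):
    """Create n immigrants by bit-flip mutation of the elite genome."""
    return [[1 - g if rng.random() < p_mut else g for g in elite]
            for _ in range(n)]

rng = random.Random(42)                 # seeded for reproducibility
elite = [1, 0, 1, 1, 0, 1, 0, 0]        # hypothetical elite genome
imms = make_immigrants(elite, n=3, p_mut=0.2, rng=rng)
print(len(imms), all(len(c) == len(elite) for c in imms))
```

In the full scheme these immigrants would replace the worst individuals each generation; a memory-based variant mutates the best stored memory point instead of the previous generation's elite.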

  8. Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy

    NASA Astrophysics Data System (ADS)

    Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente

    2017-02-01

    We report on a reduced cost, portable and compact prototype design of lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast convergence algorithm for image processing. All together, MISHELF (initials coming from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel domain diffraction patterns in a single camera snap-shot incoming from illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned (illumination wavelengths centered at the maximum sensitivity of the camera detection channels) configuration but here we report on a detuned (non-centered ones) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range) phase-retrieved (twin image elimination) quantitative phase imaging of dynamic events (video rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method becomes in an alternative instrument improving some capabilities of existing lensless microscopes.

  10. Assessment of Co-benefits of vehicle emission reduction measures for 2015-2020 in the Pearl River Delta region, China.

    PubMed

    Liu, Yong-Hong; Liao, Wen-Yuan; Lin, Xiao-Fang; Li, Li; Zeng, Xue-Lan

    2017-04-01

    Vehicle emissions have become one of the key factors affecting urban air quality and climate change in the Pearl River Delta (PRD) region, so it is important to design emission reduction policies based on quantitative Co-benefits for air pollutants and greenhouse gas (GHG). Emissions of air pollutants and GHG by 2020 were first predicted based on the no-control scenario, and five vehicle emission reduction scenarios were designed in view of economy, technology and policy, whose emission reductions were calculated. Co-benefits between air pollutants and GHG were then quantitatively analyzed by the methods of coordinate system and cross-elasticity. Results show that the emission reduction effects and the Co-benefits of different measures vary greatly over 2015-2020. If no control scheme were applied, most air pollutants and GHG would increase substantially, by 20-64%, by 2020, with the exception of CO, VOC and PM2.5. Different control measures had different reduction effects for individual air pollutants and GHG. The weakest measure was Eliminating Motorcycles, with an average reduction rate of 0.09% for air pollutants and GHG, while the rate for Updated Emission Standard was 41.74%. The Eliminating Yellow-label Vehicle scenario had an obvious reduction effect for every single pollutant in the earlier years, but its Co-benefits would decline to zero by 2020. From the perspective of emission reductions and co-control effect, the Updated Emission Standard scenario was best for reducing air pollutants and GHG substantially (tan α = 1.43 and Els = 1.77). Copyright © 2016 Elsevier Ltd. All rights reserved.
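
    The cross-elasticity indicator named in the abstract can be sketched under a common definition used in co-control analyses (assumed here, not quoted from the paper): the relative change of GHG emissions divided by the relative change of an air pollutant. The emission levels below are hypothetical.

```python
# Cross-elasticity sketch for co-benefit analysis (common definition,
# assumed): values near 1 indicate a measure cuts GHG and the air
# pollutant in roughly equal proportion.

def relative_reduction(before, after):
    return (before - after) / before

def cross_elasticity(ghg_before, ghg_after, ap_before, ap_after):
    return (relative_reduction(ghg_before, ghg_after)
            / relative_reduction(ap_before, ap_after))

# Hypothetical emission levels (kt/yr) before and after a control measure.
els = cross_elasticity(100.0, 80.0, 50.0, 42.0)   # 20% GHG cut vs 16% AP cut
print(round(els, 2))  # -> 1.25
```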

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, Andrew; Shaddock, Daniel A.; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109

    The Laser Interferometer Space Antenna (LISA) will be the first dedicated space based gravitational wave detector. LISA will consist of a triangular formation of spacecraft, forming an interferometer with 5×10^6 km long arms. Annual length variations of the interferometer arms prevent exact laser frequency noise cancellation. Despite prestabilization to an optical cavity, the expected frequency noise is many orders of magnitude larger than the required levels. Arm locking is a feedback control method that will further stabilize the laser frequency by referencing it to the 5×10^6 km arms. Although the original arm locking scheme produced a substantial noise reduction, the technique suffered from slowly decaying start-up transients and excess noise at harmonic frequencies of the inverse round-trip time. Dual arm locking, presented here, improves on the original scheme by combining information from two interferometer arms for feedback control. Compared to conventional arm locking, dual arm locking exhibits significantly reduced start-up transients, no noise amplification at frequencies within the LISA signal band, and more than 50-fold improvement in noise suppression at low frequencies. In this article we present a detailed analysis of the dual arm locking control system and present simulation results showing a noise reduction of 10 000 at a frequency of 10 mHz.

  12. Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.

    1989-01-01

    An efficient 3-D hybrid scheme is applied for solving the Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirements. The calculated power coefficients for two advanced propellers, SR3 and SR7L, at various advance ratios showed good correlation with experiment. Spanwise distributions of elemental power coefficient and steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aerodynamic loading should be included for better correlation.

  13. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  14. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses the spatial domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performances of these techniques were compared in terms of bit reduction and compression ratio. LZW was found to be better than the others and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
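
    The LSB embedding described in the abstract can be sketched as a round trip: write watermark bits into pixel least significant bits, then read them back. The compression and hashing steps are omitted, and the pixel values and watermark bits are hypothetical.

```python
# Round-trip sketch of LSB watermark embedding: each watermark bit replaces
# the least significant bit of one pixel, changing its value by at most 1,
# which is what preserves perceptual quality.

def embed_lsb(pixels, bits):
    """Replace the LSB of the first len(bits) pixels with watermark bits."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n_bits):
    """Read the watermark back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

roni = [200, 201, 77, 54, 255, 0, 128, 99]   # hypothetical 8-bit RONI pixels
mark = [1, 0, 1, 1, 0, 1]                    # hypothetical watermark bits
stego = embed_lsb(roni, mark)
print(extract_lsb(stego, len(mark)) == mark)            # -> True (round trip)
print(max(abs(a - b) for a, b in zip(roni, stego)))     # -> 1 (LSB-only change)
```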

  15. A Novel High-Efficiency Rear-Contact Solar Cell with Bifacial Sensitivity

    NASA Astrophysics Data System (ADS)

    Hezel, R.

    At present, wafer-based silicon solar cells have a share of more than 90% of the photovoltaic market. Despite rapid growth in the manufacturing volume, accompanied by a significant drop in the module selling price, the high costs currently associated with photovoltaic power generation are one of the most important obstacles to widespread global use of solar electricity. Up to a certain level, a higher production volume is a key driver in cost reduction. However, apart from a drastic reduction of the silicon wafer thickness in conjunction with improved light-trapping schemes, innovative processing sequences combining very high solar cell efficiencies with simple and cost-effective fabrication techniques are needed to become competitive with conventional energy sources and thus to move solar energy from niche to mainstream.

  16. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with existing data on sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification in terms of residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. A Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme in terms of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures for improving the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
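
    The Monte Carlo cost-benefit step can be sketched as follows. All cost and benefit distributions below are assumed placeholders, not the study's data; only NPV is computed here, though BCR and IRR samples follow the same draw-and-evaluate pattern.

```python
import random, statistics

def npv(rate, cashflows):
    # cashflows[0] is the year-0 outlay (negative); the rest are yearly net benefits.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate(n=10000, rate=0.05, seed=1):
    """Monte Carlo NPV for a hypothetical gasification plant; the capital
    cost and yearly benefits are assumed distributions chosen for
    illustration only."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        capex = rng.gauss(10e6, 1e6)                     # capital cost
        yearly = [rng.gauss(1.5e6, 0.3e6) for _ in range(15)]
        samples.append(npv(rate, [-capex] + yearly))
    return statistics.mean(samples), statistics.pstdev(samples)

mean_npv, sd_npv = simulate()
print(f"mean NPV = {mean_npv/1e6:.1f} M, sd = {sd_npv/1e6:.1f} M")
```

    Repeating the draws for each waste-treatment scheme yields comparable NPV distributions, and the spread of the samples is what the paper's sensitivity analysis probes.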

  17. Effect of a national primary care pay for performance scheme on emergency hospital admissions for ambulatory care sensitive conditions: controlled longitudinal study

    PubMed Central

    Harrison, Mark J; Dusheiko, Mark; Sutton, Matt; Gravelle, Hugh; Doran, Tim

    2014-01-01

    Objective To estimate the impact of a national primary care pay for performance scheme, the Quality and Outcomes Framework in England, on emergency hospital admissions for ambulatory care sensitive conditions (ACSCs). Design Controlled longitudinal study. Setting English National Health Service between 1998/99 and 2010/11. Participants Populations registered with each of 6975 family practices in England. Main outcome measures Year specific differences between trend adjusted emergency hospital admission rates for incentivised ACSCs before and after the introduction of the Quality and Outcomes Framework scheme and two comparators: non-incentivised ACSCs and non-ACSCs. Results Incentivised ACSC admissions showed a relative reduction of 2.7% (95% confidence interval 1.6% to 3.8%) in the first year of the Quality and Outcomes Framework compared with ACSCs that were not incentivised. This increased to a relative reduction of 8.0% (6.9% to 9.1%) in 2010/11. Compared with conditions that are not regarded as being influenced by the quality of ambulatory care (non-ACSCs), incentivised ACSCs also showed a relative reduction in rates of emergency admissions of 2.8% (2.0% to 3.6%) in the first year increasing to 10.9% (10.1% to 11.7%) by 2010/11. Conclusions The introduction of a major national pay for performance scheme for primary care in England was associated with a decrease in emergency admissions for incentivised conditions compared with conditions that were not incentivised. Contemporaneous health service changes seem unlikely to have caused the sharp change in the trajectory of incentivised ACSC admissions immediately after the introduction of the Quality and Outcomes Framework. The decrease seems larger than would be expected from the changes in the process measures that were incentivised, suggesting that the pay for performance scheme may have had impacts on quality of care beyond the directly incentivised activities. PMID:25389120

  18. Ultrafast quantum computation in ultrastrongly coupled circuit QED systems.

    PubMed

    Wang, Yimin; Guo, Chu; Zhang, Guo-Qiang; Wang, Gangcheng; Wu, Chunfeng

    2017-03-10

    The latest technological progress of achieving the ultrastrong-coupling regime in circuit quantum electrodynamics (QED) systems has greatly promoted the development of quantum physics, where novel quantum optics phenomena and potential computational benefits have been predicted. Here, we propose a scheme to accelerate the nontrivial two-qubit phase gate in a circuit QED system, where superconducting flux qubits are ultrastrongly coupled to a transmission line resonator (TLR), and two more TLRs are coupled to the ultrastrongly-coupled system for assistance. The nontrivial unconventional geometric phase gate between the two flux qubits is achieved based on closed-loop displacements of the three-mode intracavity fields. Moreover, as there are three resonators contributing to the phase accumulation, the requirement on the coupling strength to realize the two-qubit gate can be reduced. Further reduction in the coupling strength to achieve a specific controlled-phase gate can be realized by adding more auxiliary resonators to the ultrastrongly-coupled system through superconducting quantum interference devices. We also present a study of our scheme with realistic parameters considering imperfect controls and a noisy environment. Our scheme possesses the merits of ultrafastness and noise-tolerance due to the advantages of geometric phases.

  19. Ultrafast quantum computation in ultrastrongly coupled circuit QED systems

    PubMed Central

    Wang, Yimin; Guo, Chu; Zhang, Guo-Qiang; Wang, Gangcheng; Wu, Chunfeng

    2017-01-01

    The latest technological progress of achieving the ultrastrong-coupling regime in circuit quantum electrodynamics (QED) systems has greatly promoted the development of quantum physics, where novel quantum optics phenomena and potential computational benefits have been predicted. Here, we propose a scheme to accelerate the nontrivial two-qubit phase gate in a circuit QED system, where superconducting flux qubits are ultrastrongly coupled to a transmission line resonator (TLR), and two more TLRs are coupled to the ultrastrongly-coupled system for assistance. The nontrivial unconventional geometric phase gate between the two flux qubits is achieved based on closed-loop displacements of the three-mode intracavity fields. Moreover, as there are three resonators contributing to the phase accumulation, the requirement on the coupling strength to realize the two-qubit gate can be reduced. Further reduction in the coupling strength to achieve a specific controlled-phase gate can be realized by adding more auxiliary resonators to the ultrastrongly-coupled system through superconducting quantum interference devices. We also present a study of our scheme with realistic parameters considering imperfect controls and a noisy environment. Our scheme possesses the merits of ultrafastness and noise-tolerance due to the advantages of geometric phases. PMID:28281654

  20. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.

    PubMed

    Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf

    2016-01-01

    One of the light field capturing techniques is focused plenoptic capturing. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of a large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and with over 20 percent compared with a High Efficiency Video Coding block copying mode.

  1. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two dimensional and three dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usuallly more accurate than the hybrid differencing method and never less accurate.
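
    The accuracy gap that motivates replacing hybrid differencing can be seen in one dimension. The sketch below solves the steady convection-diffusion model problem with central and first-order upwind differencing (the two ingredients of the hybrid scheme) and compares their errors against the exact solution. It is a minimal illustration under assumed parameters, not the TEACH discretization.

```python
import math

def thomas(sub, main, sup, rhs):
    # Thomas algorithm for a tridiagonal system (sub[0] is unused).
    n = len(rhs)
    main, rhs = main[:], rhs[:]
    for i in range(1, n):
        w = sub[i] / main[i - 1]
        main[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / main[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / main[i]
    return x

def solve(scheme, u=1.0, gamma=0.1, n=19):
    # 1-D steady convection-diffusion u*phi' = gamma*phi'' on (0,1),
    # phi(0)=0, phi(1)=1, discretized at n interior nodes.
    dx = 1.0 / (n + 1)
    D, F = gamma / dx**2, u / dx
    if scheme == "central":
        a, b, c = -D - F / 2, 2 * D, -D + F / 2
    else:  # first-order upwind (u > 0)
        a, b, c = -D - F, 2 * D + F, -D
    rhs = [0.0] * n
    rhs[-1] = -c * 1.0            # right boundary phi(1) = 1
    return thomas([a] * n, [b] * n, [c] * n, rhs), dx

def max_error(phi, dx, u=1.0, gamma=0.1):
    pe = u / gamma
    exact = lambda x: (math.exp(pe * x) - 1) / (math.exp(pe) - 1)
    return max(abs(p - exact((i + 1) * dx)) for i, p in enumerate(phi))

phi_c, dx = solve("central")
phi_u, _ = solve("upwind")
err_c, err_u = max_error(phi_c, dx), max_error(phi_u, dx)
```

    On this grid the cell Peclet number is below 2, so central differencing is stable and markedly more accurate, while upwinding smears the boundary layer with numerical diffusion; a hybrid scheme would fall back to upwinding only where the cell Peclet number exceeds 2.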

  2. Distinctive features of high-ash bituminous coals combustion with low milling fineness in furnace chambers with bottom blowing

    NASA Astrophysics Data System (ADS)

    Zroychikov, N. A.; Kaverin, A. A.; Biryukov, Ya A.

    2017-11-01

    Improving pulverized coal combustion schemes is a pressing problem for the national power industry, especially for burning coarsely milled coals with a significant share of moisture or mineral impurities. A large fraction of inert material in the fuel can impair its ignition and combustion. In addition, many boiler installations significantly exceed the standard values for nitrogen oxide emissions. Coarser milling is also attractive as a way of lowering the electricity consumed for pulverization, which can reach 30% of a power plant's auxiliary electricity consumption. Developing a combustion scheme that satisfies the requirements of both effective coal burning and environmental measures (related to NOx emissions) is a complex task that demands a compromise between the two factors, because in-furnace NOx control often increases carbon-in-ash losses. However, the widespread availability of computer modeling makes it possible to calculate many variants of combustion schemes at low cost and to find an optimum. This paper presents the results of a numerical study of combined schemes for burning coal with a high share of inert material, based on straight-flow burners and nozzles. Several distinctive features of the furnace aerodynamics, heat transfer and combustion have been found, and a combined scheme for burning coarsely milled high-ash bituminous coals is proposed that allows effective combustion of such fuels with reduced nitrogen oxide emissions.

  3. On the computational aspects of comminution in discrete element method

    NASA Astrophysics Data System (ADS)

    Chaudry, Mohsin Ali; Wriggers, Peter

    2018-04-01

    In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum-tensile-stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and of reduction in the critical time step. The first problem is addressed by using an iterative scheme which, depending on the geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed which alleviates the locally unstable motion of particles and increases computational efficiency.
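
    The mass-recovery idea can be sketched as a fixed-point scaling of fragment radii until the fragments' total mass matches the parent's. This is an assumed simplification of the paper's iterative scheme, which additionally accounts for geometric voids and particle overlaps.

```python
import math

def sphere_mass(r, rho=2650.0):
    # Mass of a spherical particle of radius r and density rho (kg/m^3).
    return rho * 4.0 / 3.0 * math.pi * r ** 3

def restore_mass(parent_r, frag_radii, tol=1e-12, max_iter=100):
    """Scale all fragment radii by a common factor until total fragment
    mass equals the parent mass. Because mass scales with r^3, the
    fixed point is reached essentially in one step; real DEM codes
    iterate because overlap/void checks perturb each update."""
    target = sphere_mass(parent_r)
    r = list(frag_radii)
    for _ in range(max_iter):
        total = sum(sphere_mass(x) for x in r)
        if abs(total - target) / target < tol:
            break
        s = (target / total) ** (1.0 / 3.0)
        r = [x * s for x in r]
    return r

frags = restore_mass(1.0, [0.5, 0.4, 0.3])
total = sum(sphere_mass(x) for x in frags)
assert abs(total - sphere_mass(1.0)) / sphere_mass(1.0) < 1e-9
```

    Without such a correction, replacing a crushed particle by inscribed fragments silently deletes mass from the simulation; the scaling restores it while preserving the fragments' relative size distribution.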

  4. A New Scheme of Integrability for (bi)Hamiltonian PDE

    NASA Astrophysics Data System (ADS)

    De Sole, Alberto; Kac, Victor G.; Valeri, Daniele

    2016-10-01

    We develop a new method for constructing integrable Hamiltonian hierarchies of Lax type equations, which combines the fractional powers technique of Gelfand and Dickey, and the classical Hamiltonian reduction technique of Drinfeld and Sokolov. The method is based on the notion of an Adler type matrix pseudodifferential operator and the notion of a generalized quasideterminant. We also introduce the notion of a dispersionless Adler type series, which is applied to the study of dispersionless Hamiltonian equations. Non-commutative Hamiltonian equations are discussed in this framework as well.

  5. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  6. A study of performance parameters on drag and heat flux reduction efficiency of combinational novel cavity and opposing jet concept in hypersonic flows

    NASA Astrophysics Data System (ADS)

    Sun, Xi-wan; Guo, Zhen-yun; Huang, Wei; Li, Shi-bin; Yan, Li

    2017-02-01

    Drag reduction and thermal protection systems for hypersonic re-entry vehicles have attracted increasing attention, and several novel concepts have been proposed by researchers. In the current study, the influences of performance parameters on the drag and heat reduction efficiency of the combinational novel cavity and opposing jet concept have been investigated numerically. The Reynolds-averaged Navier-Stokes (RANS) equations coupled with the SST k-ω turbulence model have been employed to calculate the surrounding flowfields, and the first-order spatially accurate upwind scheme was found to be more suitable for three-dimensional flowfields after a grid-independence analysis. Different cases of performance parameters, namely jet operating conditions, freestream angle of attack, and physical dimensions, were simulated after verification of the numerical method, and the effects on shock stand-off distance, drag force coefficient, and surface pressure and heat flux distributions were analyzed. This work provides a basis for future multi-objective optimization of the combinational novel cavity and opposing jet concept for drag reduction and thermal protection in hypersonic flows.

  7. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  8. Quantifying and Reducing Uncertainties in Estimating OMI Tropospheric Column NO2 Trend over The United States

    NASA Astrophysics Data System (ADS)

    Smeltzer, C. D.; Wang, Y.; Boersma, F.; Celarier, E. A.; Bucsela, E. J.

    2013-12-01

    We investigate the effects of retrieval radiation schemes and parameters on trend analysis using tropospheric nitrogen dioxide (NO2) vertical column density (VCD) measurements over the United States. Ozone Monitoring Instrument (OMI) observations from 2005 through 2012 are used in this analysis. We investigated two radiation schemes, provided by the National Aeronautics and Space Administration (NASA TOMRAD) and the Koninklijk Nederlands Meteorologisch Instituut (KNMI DAK). In addition, we analyzed trend dependence on radiation parameters, including surface albedo and viewing geometry. The cross-track mean VCD average difference is 10-15% between the two radiation schemes in 2005. As the OMI anomaly developed and progressively worsened, the difference between the two schemes became larger. Furthermore, applying surface albedo measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) leads to increases of estimated NO2 VCD trends over high-emission regions. We find that the uncertainties of OMI-derived NO2 VCD trends can be reduced by up to a factor of 3 by selecting OMI cross-track rows on the basis of their performance over the ocean [see abstract figure]. Comparison of OMI tropospheric VCD trends to those estimated from the EPA surface NO2 observations indicates that using MODIS surface albedo data and a narrower selection of OMI cross-track rows greatly improves the agreement of estimated trends between satellite and surface data. The abstract figure shows the reduction of uncertainty in the OMI NO2 trend obtained by selecting cross-track rows based on their performance over the ocean: uncertainties within the seasonal trend may be reduced by a factor of 3 or more compared with only removing the anomalous rows (OMI cross-track rows 4-24).
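
    The effect of row screening on trend uncertainty can be illustrated with ordinary least squares on synthetic monthly series: dropping noisy cross-track rows lowers the residual variance, which shrinks the standard error of the fitted trend. The trend value and noise levels below are assumptions for illustration, not OMI data.

```python
import random, math

def ols_slope(y):
    # Least-squares slope and its standard error for equally spaced data.
    n = len(y)
    x = list(range(n))
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sxx
    resid = [yi - ym - slope * (xi - xm) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, se

rng = random.Random(0)
true_trend = -0.02     # assumed NO2 decline per month (arbitrary units)
months = 96            # 2005-2012
# "All rows": large retrieval noise; "screened rows": noise cut ~3x.
noisy    = [true_trend * t + rng.gauss(0, 1.0) for t in range(months)]
screened = [true_trend * t + rng.gauss(0, 0.3) for t in range(months)]

s1, se1 = ols_slope(noisy)
s2, se2 = ols_slope(screened)
assert se2 < se1   # screening the cross-track rows tightens the trend estimate
```

    Since the slope's standard error is proportional to the residual standard deviation, a threefold noise reduction from row selection translates directly into roughly the factor-of-3 uncertainty reduction the abstract reports.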

  9. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks.

    PubMed

    Lee, Byung Moo

    2017-12-29

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but how to decide the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the channel estimation method. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. Theoretical analysis and simulation verify that the proposed algorithm works well and can thus be used as an important tool for massive MIMO systems supporting many distributed IoT devices.
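
    The invert-the-closed-form idea can be sketched with textbook large-antenna SE approximations. These formulas are assumed stand-ins (Ngo-Larsson-Marzetta-style uplink approximations with an estimation-quality factor alpha), not the paper's exact closed forms, and the target SE, user count, and SNR below are arbitrary.

```python
import math

def se_zf(M, K, p, alpha=1.0):
    # Zero-forcing: array gain (M - K), interference nulled;
    # alpha in (0,1] models channel-estimation quality.
    return K * math.log2(1 + alpha * p * (M - K))

def se_mf(M, K, p, alpha=1.0):
    # Matched filtering: array gain ~M, inter-user interference remains.
    return K * math.log2(1 + alpha * p * (M - 1) / (p * (K - 1) + 1))

def antennas_for_zf(target_se, K, p, alpha=1.0):
    # Invert se_zf for M: the per-group antenna-count rule, ZF case.
    sinr = 2 ** (target_se / K) - 1
    return math.ceil(K + sinr / (alpha * p))

# How many antennas does one group need to hit 40 bit/s/Hz for 10 users?
M = antennas_for_zf(target_se=40.0, K=10, p=1.0, alpha=0.9)
assert se_zf(M, K=10, p=1.0, alpha=0.9) >= 40.0
```

    Because each approximation is monotone in M, its exact inverse gives the smallest group size meeting a target SE, which is precisely why closed forms (rather than simulation sweeps) make the group-determination algorithm cheap.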

  10. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but how to decide the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the channel estimation method. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. Theoretical analysis and simulation verify that the proposed algorithm works well and can thus be used as an important tool for massive MIMO systems supporting many distributed IoT devices. PMID:29286339

  11. All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driven photocatalytic CO2 reduction into renewable hydrocarbon fuel.

    PubMed

    Li, Ping; Zhou, Yong; Li, Haijin; Xu, Qinfeng; Meng, Xianguang; Meng, Xiangguang; Wang, Xiaoyong; Xiao, Min; Zou, Zhigang

    2015-01-14

    An all-solid-state Z-scheme system array consisting of Fe2V4O13 nanoribbon (NR)/reduced graphene oxide (RGO)/CdS nanoparticle structures grown on a stainless-steel mesh was rationally designed for the photoconversion of gaseous CO2 into renewable hydrocarbon fuels (methane: CH4).

  12. Functional traits, convergent evolution, and periodic tables of niches.

    PubMed

    Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R

    2015-08-01

    Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and the feasibility of niche classification schemes based on functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. The resultant patterns are then integrated to produce interpretable niche gradients, ordinations and classifications. The degree of scheme periodicity would depend on the degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within niche gradient space, and niche categorisation according to trait-value thresholds. Creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions, along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
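
    The data reduction/ordination step can be sketched as extracting the leading principal axis of a species-by-trait matrix. Power iteration below is a minimal stand-in for what a real analysis would do with PCA/PCoA from a statistics package, and the trait table is a hypothetical two-trait example.

```python
import random, math

def first_pc(X, iters=200):
    """Leading principal axis of a trait matrix via power iteration on
    the centered cross-product matrix X^T X, plus each species' score
    (projection) on that axis."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        s = [sum(r[j] * vj for j, vj in enumerate(v)) for r in Xc]  # X v
        w = [sum(si * r[j] for si, r in zip(s, Xc)) for j in range(d)]  # X^T X v
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    scores = [sum(r[j] * v[j] for j in range(d)) for r in Xc]
    return v, scores

# Hypothetical trait table: 30 species, two strongly correlated traits,
# so one niche gradient should explain most of the variation.
rng = random.Random(42)
X = [[t, 2 * t + rng.gauss(0, 0.1)] for t in (rng.random() for _ in range(30))]
axis, scores = first_pc(X)
```

    With trait 2 roughly twice trait 1, the recovered axis points near direction (1, 2) and the species' scores along it form the kind of interpretable niche gradient the abstract describes; stacking axes from several trait matrices gives the integrated ordination.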

  13. Impact of China's essential medicines scheme and zero-mark-up policy on antibiotic prescriptions in county hospitals: a mixed methods study.

    PubMed

    Wei, Xiaolin; Yin, Jia; Walley, John D; Zhang, Zhitong; Hicks, Joseph P; Zhou, Yu; Sun, Qiang; Zeng, Jun; Lin, Mei

    2017-09-01

    To evaluate the impact of the national essential medicines scheme and zero-mark-up policy on antibiotic prescribing behaviour. In rural Guangxi, a natural experiment compared one county hospital which implemented the policy with a comparison hospital which did not. All outpatient and inpatient records in 2011 and 2014 were extracted from the two hospitals. The primary outcome indicator was the antibiotic prescribing rate (APR) among children aged 2-14 presenting as outpatients with a primary diagnosis of upper respiratory tract infection (URTI). We organised independent physician reviews to determine inappropriate prescribing for inpatients. Difference-in-difference analyses based on multivariate regressions were used to compare APR over time after adjusting for potential confounders. We conducted 12 in-depth interviews with paediatricians, hospital directors and health officials. A total of 8219 and 4142 outpatient prescriptions for childhood URTIs were included in the intervention and comparison hospitals, respectively. In 2011, the APR was 30% in the intervention and 88% in the comparison hospital. In 2014, the intervention hospital had significantly reduced outpatient APR by 21% (95% CI: -23%, -18%), intravenous infusion by 58% (95% CI: -64%, -52%) and prescription cost by 31 USD (95% CI: -35, -28), compared with the controls. We collected 251 inpatient records, but did not find reductions in inappropriate antibiotic use. Interviews revealed that the intervention hospital implemented a thorough antibiotic stewardship programme comprising training, peer review of prescriptions and restrictions on overprescribing. The national essential medicines scheme and zero-mark-up policy, when implemented with an antimicrobial stewardship programme, may be associated with reductions in outpatient antibiotic prescribing and intravenous infusions. © 2017 John Wiley & Sons Ltd.
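
    The core difference-in-difference arithmetic is simple: the change in the intervention hospital minus the change in the comparison hospital. The cell values below are illustrative numbers chosen to be consistent with the reported 21-point relative reduction, not the study's actual regression-adjusted data.

```python
# Antibiotic prescribing rate by hospital and year (illustrative).
apr = {
    ("intervention", 2011): 0.30, ("intervention", 2014): 0.13,
    ("comparison",   2011): 0.88, ("comparison",   2014): 0.92,
}

def did(cells):
    # Change over time in the treated group minus change in the controls:
    # common time trends cancel, leaving the policy effect.
    d_treat = cells[("intervention", 2014)] - cells[("intervention", 2011)]
    d_ctrl = cells[("comparison", 2014)] - cells[("comparison", 2011)]
    return d_treat - d_ctrl

effect = did(apr)
print(f"DiD estimate: {effect:+.2f}")   # prints -0.21
```

    The study's version additionally runs this contrast inside a multivariate regression so that patient-level confounders are adjusted for and a confidence interval can be attached.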

  14. Reduction of capsule endoscopy reading times by unsupervised image mining.

    PubMed

    Iakovidis, D K; Tsevas, S; Polydorou, A

    2010-09-01

    The screening of the small intestine has become painless and easy with wireless capsule endoscopy (WCE), a revolutionary, relatively non-invasive imaging technique performed by a wireless swallowable endoscopic capsule transmitting thousands of video frames per examination. The average time required for the visual inspection of a full 8-h WCE video ranges from 45 to 120 min, depending on the experience of the examiner. In this paper, we propose a novel approach to WCE reading time reduction by unsupervised mining of video frames. The proposed methodology is based on a data reduction algorithm applied according to a novel scheme for the extraction of representative video frames from a full-length WCE video. It can be used either as a video summarization or as a video bookmarking tool, providing the comparative advantage of being general, unbounded by the finiteness of a training set. The number of frames extracted is controlled by a parameter that can be tuned automatically. Comprehensive experiments on real WCE videos indicate that a significant reduction in reading times is feasible. In the case of the WCE videos used, this reduction reached 85% without any loss of abnormalities.
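
    The representative-frame idea can be sketched with a greedy distance-threshold summarizer whose single parameter is tuned automatically by bisection. This is an assumed stand-in for the paper's unsupervised data-reduction algorithm, and the per-frame features below are synthetic.

```python
import random

def summarize(features, thresh):
    """Keep a frame when its feature distance (Euclidean) to the last
    kept frame exceeds thresh -- near-duplicate frames are skipped."""
    kept = [0]
    for i in range(1, len(features)):
        d = sum((a - b) ** 2
                for a, b in zip(features[i], features[kept[-1]])) ** 0.5
        if d > thresh:
            kept.append(i)
    return kept

def tune(features, target, lo=0.0, hi=10.0, iters=40):
    # Bisect the threshold so the summary has at most `target` frames,
    # mirroring the automatically tunable parameter in the abstract.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if len(summarize(features, mid)) > target:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical 1-D per-frame features: slow drift plus abrupt scene changes.
rng = random.Random(7)
feats, x = [], 0.0
for i in range(1000):
    x += rng.gauss(0, 0.05) + (1.0 if i % 97 == 0 else 0.0)
    feats.append((x,))

kept = summarize(feats, tune(feats, target=150))
```

    Keeping at most 150 of 1000 frames corresponds to the order of reading-time reduction the paper reports; unlike a trained classifier, nothing here depends on a finite training set.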

  15. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  16. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  17. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
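
    The brute-force evaluation can be sketched as a plain 2-D DFT of the sampled aperture field on a grid of direction cosines. For clarity this sketch uses an O(N^4) double loop (an FFT yields identical numbers in O(N^2 log N)), and the uniform square aperture is an assumed example; a real code would sample the field induced by the reflector currents, with zero samples outside an arbitrary boundary.

```python
import cmath, math

def radiation_pattern(aperture, dx):
    """Brute-force 2-D DFT of a sampled aperture field: the I(u,v)
    radiation integral evaluated on a uniform grid of output angles."""
    n = len(aperture)
    out = [[0j] * n for _ in range(n)]
    for p in range(n):
        for q in range(n):
            s = 0j
            for a in range(n):
                for b in range(n):
                    s += aperture[a][b] * cmath.exp(
                        -2j * math.pi * (p * a + q * b) / n)
            out[p][q] = s * dx * dx     # sample spacing as the area element
    return out

# Uniformly illuminated square aperture: the on-axis value I(0,0)
# equals the illuminated area, whatever the boundary shape.
n, dx = 8, 0.1
ap = [[1.0] * n for _ in range(n)]
I = radiation_pattern(ap, dx)
assert abs(I[0][0] - n * n * dx * dx) < 1e-9
```

    Geometry independence is the point: changing the reflector boundary only changes which aperture samples are nonzero, not the transform itself.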

  18. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.

  19. Phase-Noise and Amplitude-Noise Measurement of Low-Power Signals

    NASA Technical Reports Server (NTRS)

    Rubiola, Enrico; Salik, Ertan; Yu, Nan; Maleki, Lute

    2004-01-01

    To measure the phase fluctuation between a pair of low-power microwave signals, the signals must be amplified before detection. In such cases the phase noise of the amplifier pair is the main cause of the 1/f background noise of the instrument. This article proposes a scheme that makes amplification possible while rejecting the close-in 1/f (flicker) noise of the two amplifiers. Noise rejection, which relies upon an understanding of the amplifier noise mechanism, does not require averaging. Therefore, our scheme can also serve as the detector of a closed-loop noise reduction system. The first prototype, compared to a traditional saturated-mixer system under the same conditions, shows a 24 dB noise reduction in the 1/f region.

  20. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem, theoretically as well as computationally. Robust controller design techniques frequently result in high-order controllers, so it is desirable to obtain reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions; in this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on applying control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques.
It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach that guarantees closed-loop stability is proposed, based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
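
    As a concrete illustration of one of the SVD-based schemes mentioned above, here is a minimal square-root balanced truncation sketch. The state-space matrices and reduced order are hypothetical; this is textbook balanced truncation, not the dissertation's implementation:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable system (A, B, C) to order r by square-root balanced truncation."""
    # Gramians: A P + P A' = -B B'  and  A' Q + Q A = -C' C
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)          # s = Hankel singular values
    Sr = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ Sr             # reduced -> full coordinates
    Ti = Sr @ U[:, :r].T @ Lq.T        # full -> reduced coordinates
    return Ti @ A @ T, Ti @ B, C @ T, s
```

    For a stable plant this preserves stability of the reduced model itself; as the dissertation stresses, it does not by itself guarantee stability once a reduced controller is closed around the full-order plant.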

  1. Developments in photonic and mm-wave component technology for fiber radio

    NASA Astrophysics Data System (ADS)

    Iezekiel, Stavros

    2013-01-01

    A review of photonic component technology for fiber radio applications at 60 GHz will be given. We will focus on two architectures: (i) baseband-over-fiber and (ii) RF-over-fiber. In the first approach, up-conversion to 60 GHz is performed at the picocell base stations, with data being transported over fiber, while in the second both the data and mm-wave carrier are transported over fiber. For the baseband-over-fiber scheme, we examine techniques to improve the modulation efficiency of directly-modulated fiber links. These are based on traveling-wave structures applied to series cascades of lasers. This approach combines the improvement in differential quantum efficiency with the ability to tailor impedance matching as required. In addition, we report on various base station transceiver architectures based on optically-controlled MMIC self-oscillating mixers, and their application to 60 GHz fiber radio. This approach allows low cost optoelectronic transceivers to be used for the baseband fiber link, whilst minimizing the impact of dispersion. For the RF-over-fiber scheme, we report on schemes for optical generation of 100 GHz. These use modulation of a Mach-Zehnder modulator at Vπ bias in cascade with a Mach-Zehnder driven by 1.25 Gb/s data. One of the issues in RF-over-fiber is dispersion, while reduced modulation efficiency due to the presence of the optical carrier is also problematic. We examine the use of silicon nitride micro-ring resonators for the production of optical single sideband modulation in order to combat dispersion, and for the reduction of optical carrier power in order to improve link modulation efficiency.

  2. Routh reduction and Cartan mechanics

    NASA Astrophysics Data System (ADS)

    Capriotti, S.

    2017-04-01

    In the present work a Cartan mechanics version of Routh reduction is considered, as an intermediate step towards Routh reduction in field theory. Motivation for this generalization comes from a scheme for integrable systems (Fehér and Gábor, 2002) used for understanding the occurrence of Toda field theories in the so-called Hamiltonian reduction of WZNW field theories (Fehér et al., 1992). As a way to accomplish this intermediate aim, this article also contains a formulation of the Lagrangian Adler-Kostant-Symes systems discussed in Fehér and Gábor (2002) in terms of Routh reduction.
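
    For readers unfamiliar with classical Routh reduction, a minimal statement of the construction (standard textbook material, not specific to this paper):

```latex
% Lagrangian L(q, \dot q, \dot\theta) with \theta a cyclic coordinate:
% the momentum p_\theta = \partial L / \partial\dot\theta is conserved, say p_\theta = \mu.
% The Routhian eliminates \dot\theta in favour of the constant \mu:
R^{\mu}(q, \dot q) = \Big[\, L(q, \dot q, \dot\theta) - \mu\,\dot\theta \,\Big]_{\partial L/\partial\dot\theta = \mu}
% The Euler--Lagrange equations of R^{\mu} reproduce the q-dynamics of L
% at the fixed momentum level \mu.
```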

  3. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to fix a weakness of the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation indicates that Zhu's improved scheme has faults of its own: the authentication scheme cannot execute correctly, and it is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses in the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.

  4. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the increasing security requirements of networks, biometric authentication schemes for multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities that most existing authentication schemes do not consider, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks.

  5. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme is pairing-free and was claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  6. A secure biometrics-based authentication scheme for telecare medicine information systems.

    PubMed

    Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng

    2013-10-01

    The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to the denial-of-service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.

  7. Energy reduction using multi-channels optical wireless communication based OFDM

    NASA Astrophysics Data System (ADS)

    Darwesh, Laialy; Arnon, Shlomi

    2017-10-01

    In recent years, an increasing number of data center networks (DCNs) have been built to provide various cloud applications. Major challenges in the design of next-generation DC networks include reduction of energy consumption, high flexibility and scalability, high data rates, minimum latency and high cyber security. Using optical wireless communication (OWC) to augment the DC network could help confront some of these challenges. In this paper we present an OWC multi-channel communication method that could lead to a significant energy reduction in the communication equipment. The method converts a high-speed serial data stream into many slower parallel streams, and vice versa at the receiver. We implement this multi-channel concept using the optical orthogonal frequency division multiplexing (O-OFDM) method; in our scheme, we use asymmetrically clipped optical OFDM (ACO-OFDM). Our results show that this multi-channel ACO-OFDM method reduces the total energy consumption exponentially as the number of channels rises.
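
    A minimal ACO-OFDM modulator sketch illustrating the parallelization idea. The FFT size and symbol values are illustrative; this is textbook ACO-OFDM, not the authors' implementation:

```python
import numpy as np

def aco_ofdm_modulate(symbols, n_fft):
    """ACO-OFDM: data on odd subcarriers only, Hermitian symmetry for a real
    waveform, then clipping at zero to obtain a nonnegative intensity signal."""
    assert len(symbols) == n_fft // 4
    X = np.zeros(n_fft, dtype=complex)
    odd = np.arange(1, n_fft // 2, 2)
    X[odd] = symbols
    X[n_fft - odd] = np.conj(symbols)    # Hermitian symmetry -> real IFFT output
    x = np.fft.ifft(X).real * n_fft
    # Clipping noise falls only on the even subcarriers; the odd subcarriers
    # retain the data (scaled by 1/2), so the receiver reads the odd bins.
    return np.clip(x, 0.0, None)
```

    Splitting one fast serial stream across many such slower parallel channels is what yields the energy saving reported above.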

  8. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.
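
    The flavor of such a linear Fourier (von Neumann) analysis can be sketched for the much simpler first-order upwind scheme, chosen here for brevity (the paper analyzes Crowley-type schemes, not upwind):

```python
import numpy as np

def upwind_symbol(c, theta):
    """Amplification factor g(theta) of first-order upwind advection:
    u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n), with Courant number c."""
    return 1 - c * (1 - np.exp(-1j * theta))

theta = np.linspace(1e-3, np.pi, 500)
g = upwind_symbol(0.5, theta)
amplitude = np.abs(g)                        # <= 1 for 0 <= c <= 1 (stable)
phase_speed = -np.angle(g) / (0.5 * theta)   # 1 would be exact advection
```

    The same machinery, amplitude for stability and argument for phase error, is what the study applies to each Crowley variant.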

  9. Implicit Block ACK Scheme for IEEE 802.11 WLANs

    PubMed Central

    Sthapit, Pranesh; Pyun, Jae-Young

    2016-01-01

    The throughput of the IEEE 802.11 standard is significantly bounded by the associated Medium Access Control (MAC) overhead. Because of this overhead, throughput has an upper limit even when data rates are extremely high. Therefore, overhead reduction is necessary to achieve higher throughput. The IEEE 802.11e amendment introduced the block ACK mechanism to reduce the number of control messages in the MAC. Although the block ACK scheme greatly reduces overhead, further improvements are possible. In this letter, we propose an implicit block ACK method that further reduces the overhead associated with IEEE 802.11e’s block ACK scheme. Mathematical analysis results are presented for both the original protocol and the proposed scheme. A performance improvement of greater than 10% was achieved with the proposed implementation.
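
    The overhead argument can be illustrated with a toy timing model; all durations below are made-up placeholders, not IEEE 802.11e values:

```python
def mac_efficiency(n_frames, t_data=200e-6, t_ack=40e-6, t_sifs=10e-6,
                   block_ack=True):
    """Fraction of air time spent on data under a simplified timing model."""
    if block_ack:
        # One BlockAck exchange after the whole burst
        total = n_frames * t_data + t_sifs + t_ack
    else:
        # Per-frame ACK: every data frame pays SIFS + ACK
        total = n_frames * (t_data + t_sifs + t_ack)
    return n_frames * t_data / total
```

    An implicit block ACK attacks what remains of the per-burst acknowledgement cost in this model's first branch.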

  10. An evaluation of costs and benefits of a vehicle periodic inspection scheme with six-monthly inspections compared to annual inspections.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2013-09-01

    Although previous research suggests that safety benefits accrue from periodic vehicle inspection programmes, little consideration has been given to whether the benefits are sufficient to justify the often considerable costs of such schemes. Methodological barriers impede many attempts to evaluate the overall safety benefits of periodic vehicle inspection schemes, including this study, which did not attempt to evaluate the New Zealand warrant of fitness scheme as a whole. Instead, this study evaluated one aspect of the scheme: the effects of doubling the inspection frequency, from annual to six-monthly, when the vehicle reaches six years of age. In particular, reductions in safety-related vehicle faults were estimated together with the value of the safety benefits compared to the costs. When merged crash data, licensing data and roadworthiness inspection data were analysed, the increase from annual to six-monthly inspections was associated with estimated improvements in injury crash involvement rates and in the prevalence of safety-related faults of 8% (95% CI 0.4-15%) and 13.5% (95% CI 12.8-14.2%), respectively. The wide confidence interval for the drop in crash rate shows considerable statistical uncertainty about the precise size of the drop. Even assuming that this proportion of vehicle faults prevented by doubling the inspection frequency could be maintained over the vehicle age range 7-20 years, the safety benefits are very unlikely to exceed the additional costs of the six-monthly inspections to motorists, valued at $NZ 500 million annually excluding the overall costs of administering the scheme. The New Zealand warrant of fitness scheme as a whole cannot be robustly evaluated using the analysis approach used here, but the safety benefits would need to be substantial--yielding an unlikely 12% reduction in injury crashes--for benefits to equal costs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers.

    PubMed

    Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L

    2010-08-01

    To identify, categorize and examine performance-based health-outcomes reimbursement schemes for medical technology, we performed a review of such schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the identified schemes according to the timing, execution, and health outcomes measured. Our search yielded 34 coverage-with-evidence-development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  12. Fuzzy crane control with sensorless payload deflection feedback for vibration reduction

    NASA Astrophysics Data System (ADS)

    Smoczek, Jaroslaw

    2014-05-01

    Different types of cranes are widely used for shifting cargoes in building sites, shipping yards, container terminals and many manufacturing segments, where fast and precise transfer of a payload suspended on ropes, with reduced oscillation, is frequently important for productivity, efficiency and safety. The paper presents a fuzzy-logic-based robust feedback anti-sway control system that is applicable either with or without a sensor for the sway angle of the payload. The discrete-time control approach is based on fuzzy interpolation of the controllers and of the crane dynamic model's parameters with respect to the varying rope length and payload mass. An iterative procedure combining a pole placement method and interval analysis of the closed-loop characteristic polynomial coefficients is proposed to design the robust control scheme. The sensorless anti-sway control application, developed using a PAC system with an RX3i controller, was verified on a laboratory-scale overhead crane.
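
    As an illustration of the pole-placement step, here is a sketch for a linearized trolley-payload model. The simplified dynamics, parameter values and pole locations are assumptions for illustration, not the paper's model or gains:

```python
import numpy as np
from scipy.signal import place_poles

g, l, M = 9.81, 1.0, 10.0   # gravity, rope length, trolley mass (illustrative)
# States: [trolley pos, trolley vel, sway angle, sway rate]; input: trolley force
A = np.array([[0, 1, 0,      0],
              [0, 0, 0,      0],
              [0, 0, 0,      1],
              [0, 0, -g / l, 0]])
B = np.array([[0], [1 / M], [0], [-1 / (M * l)]])

# Place closed-loop poles to damp both trolley motion and payload sway
poles = [-1.0, -1.5, -2.0, -2.5]
K = place_poles(A, B, poles).gain_matrix
```

    In the paper's gain-scheduled setting, such a gain would be re-derived (via fuzzy interpolation) as the rope length l and payload mass vary.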

  13. Sustainable forest management of tropical forests can reduce carbon emissions and stabilize timber production

    Treesearch

    N. Sasaki; G.P. Asner; Yude Pan; W. Knorr; P.B. Durst; H.O. Ma; I. Abe; A.J. Lowe; L.P. Koh

    2016-01-01

    The REDD+ scheme of the United Nations Framework Convention on Climate Change has provided opportunities to manage tropical forests for timber production and carbon emission reductions. To determine the appropriate logging techniques, we analyzed potential timber production and carbon emission reductions under two logging techniques over a 40-year period of selective...

  14. Automated Reduction and Calibration of SCUBA Archive Data Using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N.; Robson, E. I.; Tilanus, R. P. J.; Holland, W. S.

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used for investigating instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data with particular emphasis on the pointing observations. This is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT.

  15. Understanding security failures of two authentication and key agreement schemes for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra

    2015-03-01

    Smart-card-based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public-key-cryptography-based schemes, and both are important as efficient solutions for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to a denial-of-service attack. To understand the security failures of these cryptographic schemes, which is key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.

  16. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that supports patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes essential when a remote patient logs in to the telecare server. Recently, many extended-chaotic-maps-based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, based on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attack, and lack of local verification. To overcome these flaws, we propose a chaotic-maps and smart-card-based password authentication scheme applying biometrics and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient, and hence more practical for telemedical environments.

  17. Reuse of process water in a waste-to-energy plant: An Italian case of study.

    PubMed

    Gardoni, Davide; Catenacci, Arianna; Antonelli, Manuela

    2015-09-01

    The minimisation of water consumption in waste-to-energy (WtE) plants is an outstanding issue, especially in regions where water supply is critical and withdrawals come from municipal waterworks. Among the various possible solutions, the most general, simple and effective one is the reuse of process water. This paper discusses the effectiveness of two different reuse options in an Italian WtE plant, starting from the analytical characterisation and the flow-rate measurement of the fresh water and process water flows of each utility internal to the WtE plant (e.g. cooling, bottom ash quenching, flue gas wet scrubbing). This census made it possible to identify the direct connections that optimise the reuse scheme while avoiding additional water treatments. The effluent of the physical-chemical wastewater treatment plant (WWTP), located in the WtE plant, was considered not adequate for direct reuse because of the possible deposition of mineral salts and the clogging potential associated with residual suspended solids. Nevertheless, to obtain a high reduction in water consumption, reverse osmosis should be installed to remove non-metallic ions (Cl⁻, SO₄²⁻) and residual organic and inorganic pollutants. Two efficient solutions were identified. The first, a simple reuse scheme based on a cascade configuration, allowed a 45% reduction in water consumption (from 1.81 to 0.99 m³ per tonne of MSW, Municipal Solid Waste) without specific water treatments. The second, a cascade configuration with a recycle based on a reverse osmosis process, allowed a 74% reduction in water consumption (from 1.81 to 0.46 m³ per tonne of MSW). The results of the present work show that it is possible to reduce water consumption, and in turn wastewater production, while at the same time reducing the operating cost of the WtE plant. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A novel general approach to round-off error analysis using error complexity concepts is described and applied to the analysis of the Gaussian elimination and Gauss-Jordan schemes for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence when parallel computers are available.
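
    For reference, a minimal Gauss-Jordan solver with partial pivoting (an illustrative sketch, not the paper's error-complexity analysis):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve Ax = b by reducing [A | b] to reduced row-echelon form.

    Unlike Gaussian elimination, every pivot clears its entire column,
    so no separate back-substitution step is needed."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # eliminate above and below the pivot
    return M[:, -1]
```

    The column-clearing update for all rows i != k is independent across rows, which is the inherent parallelism noted above.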

  19. Present state of HDTV coding in Japan and future prospect

    NASA Astrophysics Data System (ADS)

    Murakami, Hitomi

    The development status of HDTV digital codecs in Japan is evaluated; several bit-rate-reduction codecs have been developed for 1125-line/60-field HDTV, and performance trials have been conducted over satellite and optical fiber links. Future development efforts will attempt to achieve more efficient coding schemes able to reduce the bit rate to as little as 45 Mbps, as well as to apply coding schemes to asynchronous transfer mode (ATM) networks.

  20. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes.
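
    The chaotic-map machinery rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which enables a Diffie-Hellman-style exchange. A toy real-valued sketch follows; real schemes evaluate T_n over Z_p for security, and all values below are made up:

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

x = 0.53                      # public seed
r, s = 15, 22                 # the two parties' private keys
A = chebyshev(r, x)           # Alice's public value
B = chebyshev(s, x)           # Bob's public value
key_alice = chebyshev(r, B)   # T_r(T_s(x))
key_bob = chebyshev(s, A)     # T_s(T_r(x)) -- equal by the semigroup property
```

    The timestamp and hash operations of the enhanced scheme are layered on top of this shared secret.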

  1. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes. PMID:28759615

  2. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the increasing security requirements of networks, biometric authentication schemes for multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on our cryptanalysis of Mishra et al.’s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities that most existing authentication schemes do not consider, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606

  3. A solution to the Navier-Stokes equations based upon the Newton Kantorovich method

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.

    1977-01-01

    An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order time-accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. The results are in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicate a potential for significant reduction in computation time relative to current iterative techniques.
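
    The Newton-Kantorovich idea is to linearize the nonlinear operator about the current iterate and solve one linear problem per step. A sketch on a toy algebraic system (the system below is hypothetical, not the Navier-Stokes discretization):

```python
import numpy as np

def newton_kantorovich(F, J, u0, tol=1e-12, max_iter=50):
    """Solve F(u) = 0 by repeatedly solving the linearization J(u_k) d = -F(u_k)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(u), -F(u))
        u = u + d
        if np.linalg.norm(d) < tol:
            break
    return u

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_kantorovich(F, J, [2.0, 0.5])
```

    Applied to the vorticity transport equation's nonlinear terms, the same linearization is what lets the scheme avoid inner iterations at most time steps.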

  4. Effect of superconducting solenoid model cores on spanwise iron magnet roll control

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1985-01-01

    Compared with conventional ferromagnetic fuselage cores, superconducting solenoid cores appear to offer significant reductions in the projected cost of a large wind tunnel magnetic suspension and balance system. The provision of sufficient magnetic roll torque capability has been a long-standing problem with all magnetic suspension and balance systems; and the spanwise iron magnet scheme appears to be the most powerful system available. This scheme utilizes iron cores which are installed in the wings of the model. It was anticipated that the magnetization of these cores, and hence the roll torque generated, would be affected by the powerful external magnetic field of the superconducting solenoid. A preliminary study has been made of the effect of the superconducting solenoid fuselage model core concept on the spanwise iron magnet roll torque generation schemes. Computed data for one representative configuration indicate that reductions in available roll torque occur over a range of applied magnetic field levels. These results indicate that a 30-percent increase in roll electromagnet capacity over that previously determined will be required for a representative 8-foot wind tunnel magnetic suspension and balance system design.

  5. Read buffer optimizations to support compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.

    1993-01-01

    Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, with the better choice depending on whether split-cycle saves are assumed. Up to a 55 percent read buffer size reduction is achievable with an average reduction of 39 percent given the most efficient read buffer configuration and a variety of applications.
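    The idea of saving only the data required for rollback can be illustrated with a toy sketch. Names and the dictionary-based register file below are hypothetical; the paper's read buffer is a hardware structure operating on register-file reads, not a software object.

    ```python
    from collections import deque

    class ReadBuffer:
        """Toy rollback buffer: before a register is overwritten, its old
        value is saved; only the last n saves are kept, so state can be
        restored up to n instructions back."""
        def __init__(self, n):
            self.saves = deque(maxlen=n)  # (register, old_value) pairs

        def write(self, regs, reg, value):
            self.saves.append((reg, regs.get(reg)))  # save old value first
            regs[reg] = value

        def rollback(self, regs, k):
            """Undo the last k (<= n) register writes."""
            for _ in range(min(k, len(self.saves))):
                reg, old = self.saves.pop()
                if old is None:
                    regs.pop(reg, None)
                else:
                    regs[reg] = old

    regs = {}
    buf = ReadBuffer(4)               # rollback distance of 4 writes
    buf.write(regs, "r1", 10)
    buf.write(regs, "r1", 20)
    buf.rollback(regs, 1)             # undo the last write: r1 back to 10
    ```
    
    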

  6. Study on Noise Prediction Model and Control Schemes for Substation

    PubMed Central

    Gao, Yang; Liu, Songtao

    2014-01-01

    With the government's emphasis on the environmental impact of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation negatively affects the ambient environment. This paper focuses on using acoustic simulation software to predict and control substation noise. According to the characteristics of substation noise and available noise reduction techniques, an acoustic field model of a substation was established with the SoundPLAN software to predict the extent of substation noise. On this basis, four noise control schemes were advanced to provide helpful references for noise control during a new substation's design and construction. The feasibility and effect of these control schemes were verified by simulation modeling. The simulation results show that, under conventional measures, the substation always exceeds the noise limits at its boundary; the excess noise can be efficiently reduced by applying the corresponding noise reduction methods. PMID:24672356

  7. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; ...

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing the pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa-, penta-, tetra-coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas, and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted pKas and Kon/off values. Finally, these findings demonstrate the effectiveness of the present approach for computing electrochemical and thermodynamic properties of a complex transition metal-containing cofactor.
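    The reported error magnitudes can be put on a common footing through the standard thermochemical identities linking computed free-energy changes to reduction potentials and pKas. The sketch below uses textbook relations, not the authors' code; constants and the example values are illustrative.

    ```python
    import math

    F = 96485.332    # Faraday constant, C/mol
    R = 8.314462     # molar gas constant, J/(mol K)
    T = 298.15       # temperature, K

    def reduction_potential(dG_red_kJ, n=1):
        """E = -dG_red / (n F); dG of the reduction half-reaction in kJ/mol, E in volts."""
        return -dG_red_kJ * 1000.0 / (n * F)

    def pKa(dG_deprot_kJ):
        """pKa = dG_deprotonation / (R T ln 10), dG in kJ/mol."""
        return dG_deprot_kJ * 1000.0 / (R * T * math.log(10))

    # An RMS error of 80 mV (n = 1) corresponds to a free-energy error of
    # about 7.7 kJ/mol, the same order as the ~2 log-unit pKa error
    # (1 pKa unit at 298 K is ~5.7 kJ/mol).
    dG_err_kJ = 0.080 * F / 1000.0
    ```
    
    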

  8. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on quickly finding the files in cloud storage that a user is interested in. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes of users are delegated to a proxy server. Our scheme supports multiple attributes being revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  9. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage.

    PubMed

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on quickly finding the files in cloud storage that a user is interested in. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes of users are delegated to a proxy server. Our scheme supports multiple attributes being revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.

  10. A back-fitting algorithm to improve real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan

    2018-07-01

    Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
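    The back-fitting alternation of scheme V can be sketched on a toy problem: a one-parameter "hydrological" model q = a·x with an AR(1) error model, refit in turn until both stabilize. The linear model, synthetic data, and all names are illustrative, not the study's Baiyunshan setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 10.0, 200)            # toy forcing (e.g. a rainfall index)
    e = np.zeros(200)
    for t in range(1, 200):                    # AR(1)-correlated model error
        e[t] = 0.6 * e[t - 1] + rng.normal(0.0, 0.3)
    q_obs = 2.5 * x + e                        # synthetic "observed" streamflow

    a, rho = 1.0, 0.0                          # model parameter, AR coefficient
    for _ in range(30):                        # back-fitting: alternate the two fits
        err = q_obs - a * x                    # errors under the current model
        y = q_obs[1:] - rho * err[:-1]         # target with AR-predicted error removed
        a = np.sum(y * x[1:]) / np.sum(x[1:] ** 2)
        err = q_obs - a * x                    # recompute errors, then refit the AR model
        rho = np.sum(err[1:] * err[:-1]) / np.sum(err[:-1] ** 2)
    ```

    On this toy data the alternation recovers both the model parameter (near 2.5) and the error autocorrelation (near 0.6), rather than letting the AR term absorb model bias.
    
    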

  11. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    PubMed

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.

  12. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    PubMed Central

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  13. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper, we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
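    The two-tier idea can be sketched with an energy-based Gaussian LRT inside each cluster and a weighted vote at the fusion center. This uses a textbook LRT for a Gaussian signal in Gaussian noise and made-up cluster weights, not the paper's exact statistics or fusion rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cluster_llr(y, noise_var, sig_var):
        # Log-likelihood ratio for zero-mean Gaussian samples:
        # H1 variance = noise + signal, H0 variance = noise.
        s0, s1 = noise_var, noise_var + sig_var
        return 0.5 * np.sum(y ** 2) * (1 / s0 - 1 / s1) - 0.5 * len(y) * np.log(s1 / s0)

    def shc_decide(cluster_samples, noise_var, sig_var, weights, threshold=0.0):
        # Soft LRT inside each cluster -> one hard vote per cluster,
        # then weighted fusion of the votes at the fusion center.
        votes = np.array([1.0 if cluster_llr(y, noise_var, sig_var) > 0 else -1.0
                          for y in cluster_samples])
        return bool(np.dot(weights, votes) > threshold)

    w = np.array([0.5, 0.3, 0.2])    # e.g. SNR-based cluster reliability weights
    present = [rng.normal(0.0, np.sqrt(2.0), 500) for _ in range(3)]  # signal + noise
    absent = [rng.normal(0.0, 1.0, 500) for _ in range(3)]            # noise only
    d1 = shc_decide(present, 1.0, 1.0, w)
    d0 = shc_decide(absent, 1.0, 1.0, w)
    ```
    
    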

  14. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
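    A four-stage Runge-Kutta time march of the kind referenced above updates the solution through successive weighted residual evaluations. The minimal sketch below applies the standard stage coefficients to a semi-discrete law du/dt = R(u), with a periodic first-order upwind advection residual standing in for the paper's unstructured Euler/Navier-Stokes residual; the CFL number and grid are illustrative.

    ```python
    import numpy as np

    def residual(u, dx, a=1.0):
        # First-order upwind residual for du/dt = -a du/dx on a periodic grid.
        return -a * (u - np.roll(u, 1)) / dx

    def rk4_stage_step(u, dt, dx):
        # Jameson-style four-stage update: u_k = u_0 + alpha_k * dt * R(u_{k-1}).
        alphas = (1 / 4, 1 / 3, 1 / 2, 1.0)
        u0, uk = u, u
        for a_k in alphas:
            uk = u0 + a_k * dt * residual(uk, dx)
        return uk

    n = 100
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = 0.5 * dx                     # CFL-limited time step
    u = np.sin(2 * np.pi * x)         # smooth initial profile
    for _ in range(50):
        u = rk4_stage_step(u, dt, dx)
    ```

    The upwind residual is conservative (its entries sum to zero), so the mean of u is preserved exactly, while the scheme's dissipation slightly damps the wave amplitude.
    
    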

  15. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  16. Mechanisms of Phosphorus Acquisition and Lipid Class Remodeling under P Limitation in a Marine Microalga.

    PubMed

    Mühlroth, Alice; Winge, Per; El Assimi, Aimen; Jouhet, Juliette; Maréchal, Eric; Hohmann-Marriott, Martin F; Vadstein, Olav; Bones, Atle M

    2017-12-01

    Molecular mechanisms of phosphorus (P) limitation are of great interest for understanding algal production in aquatic ecosystems. Previous studies point to P limitation-induced changes in lipid composition. Because the molecular mechanisms of this specific P-stress adaptation in microalgae remain unresolved, we reveal a detailed phospholipid-recycling scheme in Nannochloropsis oceanica and describe important P acquisition genes, based on closely corresponding transcriptome and lipidome data. Initial responses to P limitation showed increased expression of genes involved in P uptake and an expansion of the P substrate spectrum based on purple acid phosphatases. An increase in P trafficking displayed a rearrangement between compartments, supplying P to the chloroplast and carbon to the cytosol for lipid synthesis. We propose a novel phospholipid-recycling scheme for algae that leads to the rapid reduction of phospholipids and synthesis of the P-free lipid classes. At the transcriptome level, P mobilization through membrane lipid degradation is mediated mainly by two glycerophosphoryldiester phosphodiesterases and three patatin-like phospholipases A. To compensate for low phospholipid levels in exponential growth, N. oceanica synthesized sulfoquinovosyldiacylglycerol and diacylglyceroltrimethylhomoserine. This study shows that an N. oceanica strain has a unique repertoire of genes that facilitate P acquisition and the degradation of phospholipids compared with other stramenopiles. The novel phospholipid-recycling scheme opens new avenues for metabolic engineering of lipid composition in algae. © 2017 American Society of Plant Biologists. All Rights Reserved.

  17. An optimization model for regional air pollutants mitigation based on the economic structure adjustment and multiple measures: A case study in Urumqi city, China.

    PubMed

    Sun, Xiaowei; Li, Wei; Xie, Yulei; Huang, Guohe; Dong, Changjuan; Yin, Jianguang

    2016-11-01

    A model based on economic structure adjustment and pollutant mitigation was proposed and applied in Urumqi. Best-worst case analysis and scenario analysis were performed with the model to guarantee parameter accuracy and to analyze the effect of changes in emission reduction styles. Results indicated that pollutant mitigation in the electric power industry, the iron and steel industry, and traffic relied mainly on technological transformation measures, engineering transformation measures, and structural emission reduction measures, respectively; pollutant mitigation in the cement industry relied mainly on structural emission reduction and technological transformation measures, and pollutant mitigation in the thermal industry relied on all four mitigation measures. The results also indicated that structural emission reduction was the better measure for pollutant mitigation in Urumqi. The iron and steel industry contributed greatly to SO2, NOx, and PM (particulate matter) emission reduction and should be given special attention in pollutant emission reduction. In addition, the scale of the iron and steel industry should be reduced with the decrease of SO2 mitigation amounts, the scales of traffic and the electric power industry should be reduced with the decrease of NOx mitigation amounts, and the scales of the cement industry and the iron and steel industry should be reduced with the decrease of PM mitigation amounts. The study can provide references for pollutant mitigation schemes to decision-makers for regional economic and environmental development in the 12th Five-Year Plan on National Economic and Social Development of Urumqi. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems

    PubMed Central

    Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok

    2018-01-01

    Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology and then geometric theorems are applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze properties of three GT-based tracking algorithms and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme can provide generally finer tracking accuracy under even lower node density and noisier topologies, in comparison with other schemes. PMID:29562621
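    The Intercept (Thales) Theorem underlying the IT scheme can be checked numerically: a straight trajectory from a common point cuts two parallel LOSLs at distances in a fixed ratio. The sketch below computes CPs as plain line intersections on illustrative geometry; it is not the ERT method itself.

    ```python
    def crossing_point(p1, p2, q1, q2):
        # Intersection of line p1-p2 (target trajectory) with line q1-q2
        # (a LOS link), via the standard 2x2 determinant solution.
        (x1, y1), (x2, y2) = p1, p2
        (x3, y3), (x4, y4) = q1, q2
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    # A straight trajectory from a common point crosses two parallel LOSLs.
    apex = (0.0, 0.0)
    link1 = ((0.0, 1.0), (10.0, 1.0))   # LOSL at y = 1
    link2 = ((0.0, 3.0), (10.0, 3.0))   # parallel LOSL at y = 3
    traj_end = (4.0, 6.0)
    cp1 = crossing_point(apex, traj_end, *link1)
    cp2 = crossing_point(apex, traj_end, *link2)
    # Intercept theorem: cp2 lies exactly 3x as far from the apex as cp1,
    # matching the 3:1 ratio of the parallels' distances from the apex.
    ```
    
    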

  19. Rectilinear six-dimensional ionization cooling channel for a muon collider: A theoretical and numerical study

    DOE PAGES

    Stratakis, Diktys; Palmer, Robert B.

    2015-03-06

    A Muon Collider requires a reduction of the six-dimensional emittance of the captured muon beam by several orders of magnitude. In this study, we describe a novel rectilinear cooling scheme that should meet this requirement. First, we present the conceptual design of our proposed scheme wherein we detail its basic features. Then, we establish the theoretical framework to predict and evaluate the performance of ionization cooling channels and discuss its application to our specific case. In conclusion, we present the first end-to-end simulation of 6D cooling for a Muon Collider and show a notable reduction of the 6D emittance by five orders of magnitude. We find good agreement between simulation and theory.
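    For reference, the transverse emittance evolution that such a channel must balance is commonly written as the standard ionization-cooling rate equation (a textbook form, not reproduced from the paper):

    ```latex
    \frac{d\varepsilon_n}{ds} = -\frac{1}{\beta^2}\,\frac{dE_\mu}{ds}\,\frac{\varepsilon_n}{E_\mu}
    + \frac{\beta_\perp\,(13.6\,\mathrm{MeV})^2}{2\,\beta^3\,E_\mu\,m_\mu c^2\,L_R}
    ```

    where β is the muon velocity, dEμ/ds the ionization energy-loss rate in the absorber, β⊥ the transverse betatron function at the absorber, and L_R the absorber radiation length; the cooling term competes with multiple-scattering heating, and the equilibrium emittance follows from setting the right-hand side to zero.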

  20. Massive-training artificial neural network (MTANN) for reduction of false positives in computer-aided detection of polyps: Suppression of rectal tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Kenji; Yoshida, Hiroyuki; Naeppi, Janne

    2006-10-15

    One of the limitations of the current computer-aided detection (CAD) of polyps in CT colonography (CTC) is a relatively large number of false-positive (FP) detections. Rectal tubes (RTs) are one of the typical sources of FPs because a portion of a RT, especially a portion of a bulbous tip, often exhibits a cap-like shape that closely mimics the appearance of a small polyp. Radiologists can easily recognize and dismiss RT-induced FPs; thus, they may lose their confidence in CAD as an effective tool if the CAD scheme consistently generates such "obvious" FPs due to RTs. In addition, RT-induced FPs may distract radiologists from less common true positives in the rectum. Therefore, removal of RT-induced FPs as well as other types of FPs is desirable while maintaining a high sensitivity in the detection of polyps. We developed a three-dimensional (3D) massive-training artificial neural network (MTANN) for distinction between polyps and RTs in 3D CTC volumetric data. The 3D MTANN is a supervised volume-processing technique which is trained with input CTC volumes and the corresponding "teaching" volumes. The teaching volume for a polyp contains a 3D Gaussian distribution, and that for a RT contains zeros, for enhancement of polyps and suppression of RTs, respectively. For distinction between polyps and nonpolyps including RTs, a 3D scoring method based on a 3D Gaussian weighting function is applied to the output of the trained 3D MTANN. Our database consisted of CTC examinations of 73 patients, scanned in both supine and prone positions (146 CTC data sets in total), with optical colonoscopy as a reference standard for the presence of polyps. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 of which were 10-25 mm in size.
    These CTC cases were subjected to our previously reported CAD scheme that included centerline-based segmentation of the colon, shape-based detection of polyps, and reduction of FPs by use of a Bayesian neural network based on geometric and texture features. Application of this CAD scheme yielded 96.4% (27/28) by-polyp sensitivity with 3.1 (224/73) FPs per patient, among which 20 FPs were caused by RTs. To eliminate the FPs due to RTs and possibly other normal structures, we trained a 3D MTANN with ten representative polyps and ten RTs, and applied the trained 3D MTANN to the above CAD true- and false-positive detections. In the output volumes of the 3D MTANN, polyps were represented by distributions of bright voxels, whereas RTs and other normal structures partly similar to RTs appeared as darker voxels, indicating the ability of the 3D MTANN to suppress RTs as well as other normal structures effectively. Application of the 3D MTANN to the CAD detections showed that it eliminated all 20 RT-induced FPs, as well as 53 FPs due to other causes, without removal of any true positives. Overall, the 3D MTANN was able to reduce the FP rate of the CAD scheme from 3.1 to 2.1 FPs per patient (33% reduction), while the original by-polyp sensitivity of 96.4% was maintained.
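    The 3D Gaussian-weighted scoring step can be sketched as follows. The toy binary volumes stand in for actual MTANN outputs, and the volume shape and σ are illustrative, not the paper's trained network or parameters.

    ```python
    import numpy as np

    def mtann_score(output_volume, sigma):
        """3D Gaussian-weighted score over an output volume: high when a
        bright, polyp-like blob sits at the volume centre (sketch of the
        scoring idea only)."""
        z, y, x = np.indices(output_volume.shape)
        cz, cy, cx = [(s - 1) / 2 for s in output_volume.shape]
        g = np.exp(-((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2)
                   / (2 * sigma ** 2))
        return float(np.sum(g * output_volume) / np.sum(g))

    polyp_like = np.zeros((9, 9, 9))
    polyp_like[3:6, 3:6, 3:6] = 1.0       # bright central blob (enhanced polyp)
    tube_like = np.zeros((9, 9, 9))       # fully suppressed output (RT)
    s_polyp = mtann_score(polyp_like, sigma=2.0)
    s_tube = mtann_score(tube_like, sigma=2.0)
    ```

    Thresholding such a score separates the enhanced (polyp) outputs from the suppressed (RT) outputs.
    
    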

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, Mikhail; Ackerman, Andrew; Avramov, Alex

    Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP) and potential cloud dissipation, in agreement with earlier studies. By comparing simulations with the same microphysics coupled to different dynamical cores, as well as the same dynamics coupled to different microphysics schemes, it is found that the ice water path (IWP) is mainly controlled by ice microphysics, while the inter-model differences in LWP are largely driven by the physics and numerics of the dynamical cores. In contrast to previous intercomparisons, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSD) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of the ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case.
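    How an assumed exponential PSD shape feeds into a bulk quantity like the mass-weighted fall speed can be shown in closed form. The power-law coefficients below are illustrative placeholders, not the intercomparison's prescribed ice-particle properties.

    ```python
    from math import gamma

    def mass_weighted_fall_speed(lam, c=11.72, b=2.1, d=0.41):
        """Mass-weighted fall speed for an exponential ice PSD
        N(D) = N0 * exp(-lam * D), with power laws m = a*D**b and v = c*D**d
        (N0 and a cancel in the ratio):
            Vm = (int m v N dD) / (int m N dD)
               = c * Gamma(b + d + 1) / (Gamma(b + 1) * lam**d)
        """
        return c * gamma(b + d + 1) / (gamma(b + 1) * lam ** d)

    # A broader PSD (smaller slope lam -> more large particles) falls faster,
    # one route by which an assumed exponential shape can bias bulk-scheme IWP.
    v_broad = mass_weighted_fall_speed(lam=500.0)
    v_narrow = mass_weighted_fall_speed(lam=2000.0)
    ```
    
    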

  2. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    PubMed Central

    Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental for its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272

  3. An improved biometrics-based remote user authentication scheme with user anonymity.

    PubMed

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental for its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  4. A secure and efficient chaotic map-based authenticated key agreement scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav

    2014-10-01

    Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks, as its services are provided over the public network. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory. It enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that their scheme is vulnerable to denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. Further, we propose a new chaotic map-based anonymous user authentication scheme for TMIS to overcome the weaknesses of Jiang et al.'s scheme, while retaining the original merits of their scheme. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.

  5. Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    The high false-positive recall rate is one of the major dilemmas that significantly reduces the efficacy of screening mammography, harming a large fraction of women and increasing healthcare costs. This study aims to investigate the feasibility of reducing false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of the four images. Second, the computed features were fed to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, the two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under the receiver operating characteristic curve (AUC). An AUC of 0.793 ± 0.026 was obtained for the four-view CAD scheme, significantly higher at the 5% significance level than the AUCs achieved using only CC (p = 0.025) or MLO (p = 0.0004) view images. This study demonstrates that a quantitative assessment of global mammographic image texture and density features can provide useful and/or supplementary information for classifying malignant and benign cases among recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
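
    The adaptive scoring fusion step can be illustrated with a toy sketch: grid-search the weight w applied to the CC-view score (with 1 - w on the MLO score) and keep the weight that maximizes the AUC. The scores and labels below are hypothetical, not the study's data:

```python
def auc(scores, labels):
    """Wilcoxon/Mann-Whitney AUC: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-view ANN scores and truth labels (1 = cancer)
cc  = [0.9, 0.7, 0.6, 0.4, 0.8, 0.3, 0.2, 0.5]
mlo = [0.8, 0.6, 0.7, 0.3, 0.4, 0.5, 0.1, 0.6]
y   = [1,   1,   1,   0,   1,   0,   0,   0]

# Grid search for the fusion weight that maximizes AUC
best_w, best_auc = max(
    ((w / 100, auc([w / 100 * c + (1 - w / 100) * m
                    for c, m in zip(cc, mlo)], y))
     for w in range(101)),
    key=lambda t: t[1])
```

    Because w = 1 and w = 0 recover the single-view classifiers, the fused AUC can never fall below either individual view on the search set.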

  6. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: preliminary investigation.

    PubMed

    Park, Sang Cheol; Chapman, Brian E; Zheng, Bin

    2011-06-01

    This study developed a computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection and investigated several approaches to improve CAD performance. Twenty computed tomography examinations with various lung diseases, including 44 verified PE lesions, were selected. The proposed CAD scheme consists of five basic steps: 1) lung segmentation; 2) PE candidate extraction using an intensity mask and tobogganing region growing; 3) PE candidate feature extraction; 4) false-positive (FP) reduction using an artificial neural network (ANN); and 5) a multifeature-based k-nearest neighbor classifier for positive/negative classification. We also investigated the following additional methods to improve CAD performance: 1) grouping 2-D detected features into a single 3-D object; 2) selecting features with a genetic algorithm (GA); and 3) limiting the number of suspicious lesions allowed to be cued in one examination. The results showed that 1) the CAD scheme using tobogganing, an ANN, and the grouping method achieved a maximum detection sensitivity of 79.2%; 2) the maximum scoring method outperformed the other scoring fusion methods; 3) the GA was able to delete "redundant" features and further improve CAD performance; and 4) limiting the maximum number of cued lesions per examination reduced the FP rate by a factor of 5.3. Combining these approaches, the CAD scheme achieved 63.2% detection sensitivity with 18.4 FP lesions per examination. The study suggests that the performance of CAD schemes for PE detection depends on many factors, including 1) optimizing the 2-D region grouping and scoring methods; 2) selecting the optimal feature set; and 3) limiting the number of cued lesions allowed per examination.
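
    A minimal sketch of the final classification step: a multifeature-based k-nearest neighbor that scores each candidate by a distance-weighted vote of its nearest reference regions. The feature vectors and the exact weighting below are hypothetical, not the authors' formulation:

```python
import math

def knn_score(candidate, references, k=3):
    """Likelihood that a candidate is a true lesion: distance-weighted
    vote of its k nearest reference regions in feature space."""
    dists = sorted(
        (math.dist(candidate, feats), label) for feats, label in references)
    nearest = dists[:k]
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]   # closer = stronger vote
    pos = sum(w for w, (_, lab) in zip(weights, nearest) if lab == 1)
    return pos / sum(weights)

# Hypothetical 2-D feature vectors (e.g. contrast, size); label 1 = verified PE
refs = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0),
        ((0.1, 0.3), 0), ((0.7, 0.7), 1), ((0.3, 0.2), 0)]

score_hi = knn_score((0.85, 0.8), refs)   # candidate deep in the PE cluster
score_lo = knn_score((0.15, 0.2), refs)   # candidate deep in the FP cluster
```

    Thresholding this score (or cueing only the top-ranked candidates per examination, as in the paper's third approach) trades sensitivity against FP rate.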

  7. Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai

    2018-03-01

    Deck assembly is a critical quality-control point in the final satellite assembly process; cable extrusion and structural collisions during assembly directly affect the quality and schedule of satellite development. To address the problems in the deck assembly process, this paper proposes an assembly scheme for satellite decks based on the robot flexibility control principle. The scheme is first introduced; key technologies for end force perception and flexible docking control are then studied; the implementation process of the assembly scheme is described in detail; finally, a practical application case is given. The results show that, compared with the traditional assembly scheme, the proposed scheme has clear advantages in work efficiency, reliability, and universality.

  8. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    The ciphertext-policy attribute-based encryption (CP-ABE) scheme is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find data of interest stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key components related to the attribute must be updated, while, with the help of the cloud server, other users' secret keys and the ciphertexts related to this attribute need not be changed. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen-plaintext attack in the general bilinear group model, and semantically secure against chosen keyword attack under the bilinear Diffie-Hellman (BDH) assumption.

  10. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in such networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool that can distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new scheme has optimal storage overhead, high communication efficiency, and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550

  11. An Automated Scheme for the Large-Scale Survey of Herbig-Haro Objects

    NASA Astrophysics Data System (ADS)

    Deng, Licai; Yang, Ji; Zheng, Zhongyuan; Jiang, Zhaoji

    2001-04-01

    Owing to their spectral properties, Herbig-Haro (HH) objects can be discovered using photometric methods through a combination of filters, sampling the characteristic spectral lines and the nearby continuum. The data are commonly processed through direct visual inspection of the images. To make data reduction more efficient and the results more uniform and complete, an automated searching scheme for HH objects is developed to manipulate the images using IRAF. This approach helps to extract images with only intrinsic HH emissions. By using this scheme, the pointlike stellar sources and extended nebulous sources with continuum emission can be eliminated from the original images. The objects with only characteristic HH emission become prominent and can be easily picked up. In this paper our scheme is illustrated by a sample field and has been applied to our surveys for HH objects.
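
    The photometric principle behind such surveys — subtracting a scaled continuum image from the emission-line image so that stars and continuum nebulae cancel while pure line emitters remain — can be sketched on toy synthetic frames (the real pipeline works on IRAF images with PSF matching and many calibration stars; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32 "images": a star appears in both the emission-line and the
# continuum filter; an HH knot emits only in the line filter.
yy, xx = np.mgrid[0:32, 0:32]
star = 100.0 * np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 4.0)
knot = 40.0 * np.exp(-((xx - 22) ** 2 + (yy - 20) ** 2) / 6.0)

continuum = star + rng.normal(0, 0.5, (32, 32))
line = 1.3 * star + knot + rng.normal(0, 0.5, (32, 32))  # filters differ in throughput

# Scale factor estimated from a field star (here: the known star pixel),
# then subtract so stellar flux cancels and only line emission survives
scale = line[8, 8] / continuum[8, 8]
pure_line = line - scale * continuum
```

    In the cleaned frame the stellar source vanishes while the HH knot stands out, which is what makes automated detection straightforward.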

  12. The Effect of Funding Scheme on the Performance of Navy Repair Activities

    DTIC Science & Technology

    2005-03-01

    ...a funding scheme called Navy Working Capital Fund while others in the same region were under the Resource Management System (also referred to as

  13. Allocation and simulation study of carbon emission quotas among China's provinces in 2020.

    PubMed

    Zhou, Xing; Guan, Xueling; Zhang, Ming; Zhou, Yao; Zhou, Meihua

    2017-03-01

    China will form its carbon market in 2017 to focus on the allocation of regional carbon emission quotas in order to cope with global warming. The rationality of the regional allocation has become an important consideration for the government in ensuring stable growth across regions with disparate resource endowments and economic status. Based on a quota allocation indicator system for carbon emission, the emission quota for each province under different scenarios and schemes in 2020 is simulated by a multifactor hybrid weighted Shannon entropy allocation model. The following conclusions are drawn: (1) The top five secondary-level indicators influencing provincial quota allocation by weight are per capita energy consumption, openness, per capita carbon emission, per capita disposable income, and energy intensity. (2) The ratio of carbon emission in 2020 differs from that in 2013 in many scenarios, with the variation ordered scenario 2 > scenario 1 > scenario 3; Hubei and Guangdong are the provinces with the largest increase and decrease ratios, respectively. (3) Within the same scenario, the quota allocation varies with the emphasis placed on different reduction criteria; if the government emphasizes reduction efficiency, scheme 1 shows a marked adjustment: Hunan, Hubei, Guizhou, and Yunnan have the largest decreases, of 4.28, 8.31, 4.04, and 5.97 million tons, respectively.
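
    The entropy-weighting step underlying such allocation models can be sketched with the standard Shannon entropy-weight method: indicators whose values vary more across provinces carry more weight. The indicator matrix and quota total below are hypothetical, and the paper's hybrid weighted model is more elaborate:

```python
import math

# Hypothetical indicator matrix: rows = provinces, columns = indicators
# (e.g. per-capita energy consumption, openness, per-capita emission)
data = [
    [5.2, 0.8, 7.1],
    [3.1, 0.5, 6.0],
    [4.4, 0.9, 2.3],
    [2.0, 0.4, 5.5],
]
m, n = len(data), len(data[0])

# 1) column-normalize to proportions p_ij
col_sums = [sum(row[j] for row in data) for j in range(n)]
p = [[row[j] / col_sums[j] for j in range(n)] for row in data]

# 2) normalized Shannon entropy of each indicator, e_j in [0, 1]
e = [-sum(pij * math.log(pij) for pij in (p[i][j] for i in range(m)) if pij > 0)
     / math.log(m) for j in range(n)]

# 3) indicator weight = normalized divergence 1 - e_j
d = [1 - ej for ej in e]
weights = [dj / sum(d) for dj in d]

# 4) allocate a total quota proportionally to each province's weighted score
total_quota = 1000.0
scores = [sum(weights[j] * p[i][j] for j in range(n)) for i in range(m)]
quota = [total_quota * s / sum(scores) for s in scores]
```

    By construction the weights sum to one and the provincial quotas sum to the national total, which is the consistency property such allocation schemes need.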

  14. Deep learning methods for CT image-domain metal artifact reduction

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge

    2017-09-01

    Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.

  15. Two-loop matching factors for light quark masses and three-loop mass anomalous dimensions in the regularization invariant symmetric momentum-subtraction schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almeida, Leandro G.; Physics Department, Brookhaven National Laboratory, Upton, New York 11973; Sturm, Christian

    2010-09-01

    Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from a RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_{γμ} schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.

  17. An Integrated Programmable Wide-range PLL for Switching Synchronization in Isolated DC-DC Converters

    NASA Astrophysics Data System (ADS)

    Fard, Miad

    In this thesis, two Phase-Locked-Loop (PLL) based synchronization schemes are introduced and applied to a bi-directional Dual-Active-Bridge (DAB) dc-dc converter with an input voltage up to 80 V switching in the range of 250 kHz to 1 MHz. The two schemes synchronize gating signals across an isolated boundary without the need for an isolator per transistor. The Power Transformer Sensing (PTS) method utilizes the DAB power transformer to indirectly sense switching on the secondary side of the boundary, while the Digital Isolator Sensing (DIS) method utilizes a miniature transformer for synchronization and communication at up to 100 MHz. The PLL is implemented on-chip, and is used to control an external DAB power-stage. This work will lead to lower cost, high-frequency isolated dc-dc converters needed for a wide variety of emerging low power applications where isolator cost is relatively high and there is a demand for the reduction of parts.

  18. Integrated treatment of molasses distillery wastewater using microfiltration (MF).

    PubMed

    Basu, Subhankar; Mukherjee, Sangeeta; Kaushik, Ankita; Batra, Vidya S; Balakrishnan, Malini

    2015-08-01

    To achieve zero-liquid discharge, high pressure reverse osmosis (RO) of effluent is being employed by molasses-based alcohol distilleries. Low pressure and thus less energy intensive microfiltration (MF) is well established for particulate separation but is not suitable for removal of dissolved organics and color. This work investigates two schemes incorporating MF for molasses distillery wastewater: (a) chemical coagulation followed by treatment in a membrane bioreactor (MBR) using MF, and (b) electrocoagulation followed by MF. The performance was assessed in terms of COD and color reduction; the conversion of the generated sludge into a zeolite desiccant was also examined. A comparison of the schemes indicates electrocoagulation followed by MF through a 0.1 μm membrane to be most effective. By hydrothermal treatment, electrocoagulated sludge can be transformed into a porous NaX zeolite with a surface area of 86 m²/g, which is comparable to commercial desiccants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Automated detection of retinal nerve fiber layer defects on fundus images: false positive reduction based on vessel likelihood

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Ishida, Kyoko; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi

    2016-03-01

    Early detection of glaucoma is important for slowing or halting progression of the disease and preventing total blindness. We previously proposed an automated scheme for detection of retinal nerve fiber layer defects (NFLDs), one of the early signs of glaucoma observed on retinal fundus images. In this study, a new multi-step detection scheme was included to improve detection of subtle and narrow NFLDs. In addition, new features were added to distinguish between NFLDs and blood vessels, which are frequent sites of false positives (FPs). The result was evaluated with a new test dataset consisting of 261 cases, including 130 cases with NFLDs. Using the proposed method, the initial detection rate was improved from 82% to 98%. At a sensitivity of 80%, the number of FPs per image was reduced from 4.25 to 1.36. The results indicate the potential usefulness of the proposed method for early detection of glaucoma.

  20. Low-complexity peak-to-average power ratio reduction scheme for flip-orthogonal frequency division multiplexing visible light communication system based on μ-law mapping

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Peiran; Lu, Huimin; Feng, LiFang

    2017-06-01

    An orthogonal frequency division multiplexing (OFDM) technique called flipped OFDM (flip-OFDM) is well suited to visible light communication systems, which require the transmitted signal to be real and positive. Flip-OFDM uses two consecutive OFDM subframes to transmit the positive and negative parts of the signal. However, the peak-to-average power ratio (PAPR) of flip-OFDM is greatly increased because the many zero values in both the positive and flipped frames lower the total average power. We first analyze the performance of flip-OFDM and compare it with conventional DC-biased OFDM (DCO-OFDM); we then propose a flip-OFDM scheme combined with μ-law mapping to reduce the high PAPR. Simulation results show that the PAPR of the system is reduced by about 17.2 dB and 5.9 dB compared with the normal flip-OFDM and DCO-OFDM signals, respectively.
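
    The core of the μ-law approach — companding that boosts small amplitudes while leaving the peak unchanged, so average power rises and PAPR falls — can be sketched on a toy multicarrier frame (the carrier count, μ value, and frame length are illustrative assumptions; a matching expander is needed at the receiver):

```python
import math
import random

random.seed(1)
N = 256
# Toy OFDM-like baseband frame: sum of carriers with random phases,
# clipped to its positive part as in flip-OFDM's unipolar subframes
carriers = [(k, random.uniform(0, 2 * math.pi)) for k in range(1, 17)]
x = [sum(math.cos(2 * math.pi * k * n / N + ph) for k, ph in carriers)
     for n in range(N)]
x = [max(v, 0.0) for v in x]

def papr_db(sig):
    """Peak-to-average power ratio in dB."""
    peak = max(v * v for v in sig)
    avg = sum(v * v for v in sig) / len(sig)
    return 10 * math.log10(peak / avg)

def mu_law(sig, mu=255.0):
    """mu-law companding: expands small amplitudes, preserves the peak."""
    peak = max(abs(v) for v in sig)
    return [peak * math.log(1 + mu * abs(v) / peak) / math.log(1 + mu)
            * (1 if v >= 0 else -1) for v in sig]

before = papr_db(x)
after = papr_db(mu_law(x))
```

    Because the μ-law curve lies above the identity on (0, peak), every nonzero sample gains power while the peak maps to itself, so the PAPR of the companded frame is strictly lower.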

  1. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    DOE PAGES

    Bernstein, Diana N.; Neelin, J. David

    2016-04-28

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  3. Transmitter Spatial Diversity for FSO Uplink in Presence of Atmospheric Turbulence and Weather Conditions for Different IM Schemes

    NASA Astrophysics Data System (ADS)

    Viswanath, Anjitha; Kumar Jain, Virander; Kar, Subrat

    2017-12-01

    We investigate the error performance of an earth-to-satellite free space optical uplink using transmitter spatial diversity in the presence of turbulence and weather conditions, modeled by the gamma-gamma distribution and the Beer-Lambert law, respectively, for on-off keying (OOK), M-ary pulse position modulation (M-PPM), and M-ary differential PPM (M-DPPM) schemes. Weather conditions such as moderate, light, and thin fog cause additional degradation, while dense or thick fog and clouds may lead to link failure. The bit error rate decreases as the number of transmitters increases for all the schemes; beyond a certain number of transmitters, however, the reduction becomes marginal. The diversity gain remains almost constant across weather conditions but increases with ground-level turbulence and zenith angle. Further, the number of transmitters required to reach a desired performance level is smaller for the M-PPM scheme than for M-DPPM and OOK.
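
    The Beer-Lambert weather model assigns each condition an extinction coefficient β, with received intensity I(L) = I0·exp(-βL). A small sketch converting this to link loss in dB (the coefficients and path length are assumptions chosen for illustration, not values from the paper):

```python
import math

def weather_loss_db(beta_per_km: float, path_km: float) -> float:
    """Atmospheric attenuation from the Beer-Lambert law,
    I(L) = I0 * exp(-beta * L), expressed as a loss in dB."""
    return 10 * math.log10(math.exp(beta_per_km * path_km))

# Illustrative extinction coefficients (1/km) -- hypothetical values
conditions = {"clear": 0.1, "thin fog": 0.5,
              "light fog": 1.0, "moderate fog": 4.0}

slant_km = 2.0  # assumed effective path through the attenuating layer
losses = {c: weather_loss_db(b, slant_km) for c, b in conditions.items()}
```

    Since the loss in dB is linear in βL (about 4.34·βL), denser fog raises the link budget requirement quickly, which is why thick fog or clouds can cause outright link failure.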

  4. Connecting nitrogenase intermediates with the kinetic scheme for N2 reduction by a relaxation protocol and identification of the N2 binding state

    PubMed Central

    Lukoyanov, Dmitriy; Barney, Brett M.; Dean, Dennis R.; Seefeldt, Lance C.; Hoffman, Brian M.

    2007-01-01

    A major obstacle to understanding the reduction of N2 to NH3 by nitrogenase has been the impossibility of synchronizing electron delivery to the MoFe protein for generation of specific enzymatic intermediates. When an intermediate is trapped without synchronous electron delivery, the number of electrons, n, it has accumulated is unknown. Consequently, the intermediate is untethered from kinetic schemes for reduction, which are indexed by n. We show that a trapped intermediate itself provides a “synchronously prepared” initial state, and its relaxation to the resting state at 253 K, conditions that prevent electron delivery to MoFe protein, can be analyzed to reveal n and the nature of the relaxation reactions. The approach is applied to the “H+/H− intermediate” (A) that appears during turnover both in the presence and absence of N2 substrate. A exhibits an S = ½ EPR signal from the active-site iron–molybdenum cofactor (FeMo-co) to which are bound at least two hydrides/protons. A undergoes two-step relaxation to the resting state (C): A → B → C, where B has an S = 3/2 FeMo-co. Both steps show large solvent kinetic isotope effects: KIE ≈ 3–4 (85% D2O). In the context of the Lowe–Thorneley kinetic scheme for N2 reduction, these results provide powerful evidence that H2 is formed in both relaxation steps, that A is the catalytically central state that is activated for N2 binding by the accumulation of n = 4 electrons, and that B has accumulated n = 2 electrons. PMID:17251348
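
    The two-step relaxation A → B → C with first-order steps has the standard sequential-kinetics solution, which can be sketched as follows (the rate constants are hypothetical; the paper extracts the actual kinetics from EPR amplitudes at 253 K):

```python
import math

def sequential_relaxation(k1, k2, t):
    """Populations for the two-step relaxation A -> B -> C with
    first-order rate constants k1, k2 and A(0)=1, B(0)=C(0)=0."""
    A = math.exp(-k1 * t)
    B = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    C = 1.0 - A - B
    return A, B, C

# Hypothetical rate constants (1/min), chosen only to illustrate the shape:
# the intermediate B transiently accumulates, then drains into resting C
k1, k2 = 0.30, 0.08
A, B, C = sequential_relaxation(k1, k2, 10.0)
```

    Fitting such curves to the time-resolved EPR signal amplitudes of A and B is what lets the relaxation steps (and their solvent isotope effects) be separated.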

  5. FeynArts model file for MSSM transition counterterms from DREG to DRED

    NASA Astrophysics Data System (ADS)

    Stöckinger, Dominik; Varšo, Philipp

    2012-02-01

    The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ε-scalars in dimensional reduction.

    Program summary:
    Program title: MSSMdreg2dred.mod
    Catalogue identifier: AEKR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL-License [1]
    No. of lines in distributed program, including test data, etc.: 7600
    No. of bytes in distributed program, including test data, etc.: 197 629
    Distribution format: tar.gz
    Programming language: Mathematica, FeynArts
    Computer: Any, capable of running Mathematica and FeynArts
    Operating system: Any, with running Mathematica, FeynArts installation
    Classification: 4.4, 5, 11.1
    Subprograms used: FeynArts (Cat. Id. ADOW_v1_0), CPC 140 (2001) 418
    Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
    Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
    Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
    Running time: A few seconds to generate typical Feynman graphs with FeynArts.

  6. An expandable crosstalk reduction method for inline fiber Fabry-Pérot sensor array based on fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Ma, Lina; Hu, Zhengliang; Hu, Yongming

    2016-07-01

    The inline time division multiplexing (TDM) fiber Fabry-Pérot (FFP) sensor array based on fiber Bragg gratings (FBGs) is attractive for many applications, but the intrinsic multi-reflection (MR) induced crosstalk limits applications, especially those needing high resolution. In this paper we propose an expandable method for MR-induced crosstalk reduction, based on complex-exponent synthesis using the phase-generated carrier (PGC) scheme and the special common character of the impulse responses. The method improves demodulation stability while reducing MR-induced crosstalk. A polarization-maintaining 3-TDM experimental system with an FBG reflectivity of about 5% was set up to validate the method. The experimental results showed crosstalk reductions of 13 dB and 15 dB for sensor 2 and sensor 3, respectively, when a signal was applied to the first sensor, and a crosstalk reduction of 8 dB for sensor 3 when a signal was applied to sensor 2. The demodulation stability of the applied signal improved as well: the standard deviations of the amplitude distributions of the demodulated signals were reduced from 0.0046 to 0.0021 for sensor 2 and from 0.0114 to 0.0044 for sensor 3. Because the complex-exponent synthesis is a linear operation, and given the common character of the impulse responses we found, the method can be extended to arrays with more TDM channels once the impulse response of the inline FFP sensor array with more TDM channels is derived. It offers potential for developing a low-crosstalk inline FFP sensor array using the PGC interrogation technique with relatively high-reflectivity FBGs, which can guarantee enough light power at the photo-detector.

  7. Gender recognition from unconstrained and articulated human body.

    PubMed

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired in an unconstrained, real-world environment. A systematic study of critical issues in body-based gender recognition is also presented: which body parts are informative, how many body parts need to be combined, and which representations work well for articulated body-based gender recognition. The paper also pursues data fusion schemes and efficient feature dimensionality reduction based on partial least squares estimation. Extensive experiments are performed on two unconstrained databases that had not previously been explored for gender recognition.
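The partial-least-squares reduction step above can be sketched as follows. This is a minimal single-response, NIPALS-style implementation in NumPy for illustration only, not the authors' code; the data shapes, labels, and component count are assumed.

```python
import numpy as np

def pls_reduce(X, y, n_components=2):
    """Project features X onto components that covary with the labels y.

    Illustrative PLS1 sketch. X: (n_samples, n_features),
    y: (n_samples,) binary labels. Returns (n_samples, n_components) scores.
    """
    X = X - X.mean(axis=0)                    # center features
    r = (y - y.mean()).astype(float)          # centered response residual
    scores = []
    for _ in range(n_components):
        w = X.T @ r                           # weights: covariance with y
        w /= np.linalg.norm(w)
        t = X @ w                             # component scores
        p = X.T @ t / (t @ t)                 # feature loadings
        X = X - np.outer(t, p)                # deflate X for next component
        r = r - t * (r @ t) / (t @ t)         # deflate response
        scores.append(t)
    return np.column_stack(scores)
```

Unlike PCA, the components here are chosen for their covariance with the class label, which is why PLS is attractive for supervised dimensionality reduction.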

  8. Gender Recognition from Unconstrained and Articulated Human Body

    PubMed Central

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired in an unconstrained, real-world environment. A systematic study of critical issues in body-based gender recognition is also presented: which body parts are informative, how many body parts need to be combined, and which representations work well for articulated body-based gender recognition. The paper also pursues data fusion schemes and efficient feature dimensionality reduction based on partial least squares estimation. Extensive experiments are performed on two unconstrained databases that had not previously been explored for gender recognition. PMID:24977203

  9. Evaluation of the impact of 2 years of a dosing intervention on canine echinococcosis in the Alay Valley, Kyrgyzstan.

    PubMed

    VAN Kesteren, F; Mastin, A; Torgerson, P R; Mytynova, Bermet; Craig, P S

    2017-09-01

    Echinococcosis is a re-emerging zoonotic disease in Kyrgyzstan. In 2012, an echinococcosis control scheme that included praziquantel dosing of owned dogs was started in the Alay Valley, Kyrgyzstan. Control programmes require large investments of money and resources, so it is important to evaluate how well they are meeting their targets. However, the remoteness and semi-nomadic customs of affected communities, together with a lack of resources, complicate both echinococcosis control and its evaluation, making quick and easy assessment tools highly desirable. Lot quality assurance sampling was used to assess the impact of approximately 2 years of echinococcosis control in the Alay Valley. A pre-intervention coproELISA prevalence was established, and a 75% threshold for dosing compliance was set based on previous studies. Ten communities were visited in 2013 and 2014, with 18-21 dogs sampled per community and questionnaires administered to dog owners. After 21 months of control efforts, 8/10 communities showed evidence of reaching the 75% praziquantel dosing target, although only 3/10 showed evidence of a reduction in coproELISA prevalence. This is understandable, since years of sustained control are needed to control echinococcosis effectively, and efforts in the Alay Valley are being continued.
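The lot-quality-assurance-sampling decision logic can be illustrated with a small binomial calculation. The sketch below assumes n = 20 dogs per community and illustrative risk levels; these are not the study's exact design parameters.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, p_target=0.75, p_low=0.50, alpha=0.10):
    """Pick the largest decision threshold d such that a community whose
    true dosing coverage is p_target is classified as having reached the
    target (at least d of n sampled owners report dosing) with probability
    >= 1 - alpha. Also report the risk of passing a community whose
    coverage is only p_low. Design values here are illustrative.
    """
    for d in range(n, -1, -1):
        p_accept_target = 1 - binom_cdf(d - 1, n, p_target)
        if p_accept_target >= 1 - alpha:
            p_accept_low = 1 - binom_cdf(d - 1, n, p_low)
            return d, p_accept_target, p_accept_low
    return 0, 1.0, 1.0
```

The appeal of LQAS in remote settings is exactly this: a pass/fail decision from a small fixed sample, with both error risks known in advance.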

  10. Laser line illumination scheme allowing the reduction of background signal and the correction of absorption heterogeneities effects for fluorescence reflectance imaging.

    PubMed

    Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc

    2015-10-01

    Intraoperative fluorescence imaging in reflectance geometry is an attractive modality because it allows noninvasive monitoring of fluorescently targeted tumors located below the tissue surface. Its drawbacks include background fluorescence, which decreases contrast, and absorption heterogeneities, which can lead to misinterpretation of fluorophore concentrations. We propose a correction technique based on a laser-line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. Exploiting the relationship between the excitation intensity profile and the background fluorescence profile, we predict the amount of signal to subtract from the fluorescence images to improve contrast. Because light-absorption information is contained in both the fluorescence and excitation images, the method also corrects for the effects of absorption heterogeneities. The technique has been validated in simulations and experimentally: fluorescent inclusions were observed in several configurations at depths ranging from 1 mm to 1 cm. Results are compared with a classical wide-field detection scheme for contrast enhancement and with a fluorescence-to-excitation ratio approach for absorption correction.
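The core correction idea, that background fluorescence tracks the excitation intensity profile, can be sketched as follows. The simple proportionality model, the background mask, and the array shapes are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def subtract_background(fluo, exc, bg_region):
    """Estimate the background-fluorescence contribution from the excitation
    profile and remove it from the fluorescence image.

    Assumes the background is proportional to the excitation intensity;
    the scale factor is estimated over `bg_region`, a boolean mask of
    pixels known to contain no fluorescent inclusion.
    fluo, exc: 2-D images acquired at one position of the laser line.
    """
    scale = np.sum(fluo[bg_region]) / np.sum(exc[bg_region])
    corrected = fluo - scale * exc
    return np.clip(corrected, 0.0, None)   # negative residuals are noise
```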

  11. Unified implicit kinetic scheme for steady multiscale heat transfer based on the phonon Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuang; Guo, Zhaoli; Chen, Songze

    2017-12-01

    An implicit kinetic scheme is proposed to solve the stationary phonon Boltzmann transport equation (BTE) for multiscale heat transfer problems. Compared with the conventional discrete ordinate method, the present method employs a macroscopic equation to accelerate convergence in the diffusive regime. The macroscopic equation can be viewed as a moment equation of the phonon BTE: the heat flux in the macroscopic equation is evaluated from the nonequilibrium distribution function in the BTE, while the equilibrium state in the BTE is determined by the macroscopic equation. These two processes exchange information across scales, making the method applicable to problems with a wide range of Knudsen numbers. Implicit discretization is used to solve both the macroscopic equation and the BTE. In addition, a memory reduction technique originally developed for the stationary kinetic equation is extended to the phonon BTE. Numerical comparisons show that the present scheme predicts reasonable results in both the ballistic and diffusive regimes with high efficiency, while the memory requirement is of the same order as solving the Fourier law of heat conduction. The excellent agreement with benchmark solutions and the rapid convergence history show that the proposed macro-micro coupling is a feasible approach to multiscale heat transfer problems.

  12. An improved biometrics-based authentication scheme for telecare medical information systems.

    PubMed

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

    Telecare medical information systems (TMIS) offer healthcare delivery services, allowing patients to acquire their desired medical services conveniently over public networks; protecting patients' privacy and data confidentiality is therefore critical. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for TMIS that protects user privacy and was believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against the known session key attack and the impersonation attack. We therefore present a modified biometrics-based authentication scheme for TMIS that eliminates these faults, and we demonstrate the completeness of the proposed scheme through BAN logic. Compared with related schemes, our protocol provides stronger security and is more practical.

  13. Development of a stationary carbon emission inventory for Shanghai using pollution source census data

    NASA Astrophysics Data System (ADS)

    Li, Xianzhe; Jiang, Ping; Zhang, Yan; Ma, Weichun

    2016-12-01

    This study utilizes 521,631 activity data points from the 2007 Shanghai Pollution Source Census to compile a stationary carbon emission inventory for Shanghai. The inventory generated from our dataset shows that a large portion of Shanghai's total energy use consists of coal-oriented energy consumption. The electricity and heat production industries, iron and steel mills, and the petroleum refining industry are the main carbon emitters, and most of these industries are located in Baoshan District, Shanghai's largest contributor of carbon emissions. Policy makers can use the enterprise-level carbon emission inventory and the method designed in this study to construct sound carbon emission reduction policies. The carbon trading scheme to be established in Shanghai based on the developed carbon inventory is also introduced, with the aim of promoting the monitoring, reporting and verification of carbon trading. Moreover, we believe that it may be useful to consider the participation of industries such as food processing, beverages, and tobacco in Shanghai's carbon trading scheme. Based on the results contained herein, we recommend establishing a comprehensive carbon emission inventory by inputting data from the pollution source census used in this study.
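The compilation step described above follows the generic identity emissions = activity x emission factor, aggregated by sector and district. The sketch below uses hypothetical records and assumed emission factors, purely to show the aggregation structure, not the study's data.

```python
# Hypothetical activity records: (district, sector, fuel, activity in
# tonnes of coal equivalent). Factors are assumed t CO2 per tce values.
records = [
    ("Baoshan", "iron_and_steel",   "coal", 1200.0),
    ("Baoshan", "power_and_heat",   "coal", 3400.0),
    ("Minhang", "petroleum_refine", "oil",   800.0),
]
emission_factor = {"coal": 2.66, "oil": 3.02}

# Aggregate CO2 by (district, sector), then roll up to district totals.
inventory = {}
for district, sector, fuel, activity in records:
    co2 = activity * emission_factor[fuel]
    inventory[(district, sector)] = inventory.get((district, sector), 0.0) + co2

total_by_district = {}
for (district, _), co2 in inventory.items():
    total_by_district[district] = total_by_district.get(district, 0.0) + co2
```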

  14. Creating a Taxonomy of Local Boards of Health Based on Local Health Departments’ Perspectives

    PubMed Central

    Shah, Gulzar H.; Sotnikov, Sergey; Leep, Carolyn J.; Ye, Jiali; Van Wave, Timothy W.

    2017-01-01

    Objectives To develop a local board of health (LBoH) classification scheme and empirical definitions to provide a coherent framework for describing variation in the LBoHs. Methods This study is based on data from the 2015 Local Board of Health Survey, conducted among a nationally representative sample of local health department administrators, with 394 responses. The classification development consisted of the following steps: (1) theoretically guided initial domain development, (2) mapping of the survey variables to the proposed domains, (3) data reduction using principal component analysis and group consensus, and (4) scale development and testing for internal consistency. Results The final classification scheme included 60 items across 6 governance function domains and an additional domain—LBoH characteristics and strengths, such as meeting frequency, composition, and diversity of information sources. Application of this classification strongly supports the premise that LBoHs differ in their performance of governance functions and in other characteristics. Conclusions The LBoH taxonomy provides an empirically tested standardized tool for classifying LBoHs from the viewpoint of local health department administrators. Future studies can use this taxonomy to better characterize the impact of LBoHs. PMID:27854524
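The data-reduction step in the classification development (principal component analysis over survey items) can be sketched with a minimal SVD-based PCA. This is illustrative only; the study's item counts, rotation choices, and software are not specified here.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Reduce survey items to principal component scores.

    X: (n_boards, n_items) matrix of item responses. Returns the component
    scores and the fraction of variance each retained component explains.
    """
    Xc = X - X.mean(axis=0)                     # center each item
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T           # project onto top components
    explained = (s**2 / np.sum(s**2))[:n_components]
    return scores, explained
```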

  15. Wideband optical sensing using pulse interferometry.

    PubMed

    Rosenthal, Amir; Razansky, Daniel; Ntziachristos, Vasilis

    2012-08-13

    Advances in the fabrication of high-finesse optical resonators hold promise for the development of miniaturized, ultra-sensitive, wideband optical sensors based on resonance-shift detection. Many potential applications are foreseen for such sensors, among them highly sensitive detection in ultrasound and optoacoustic imaging. Traditionally, sensor interrogation is performed by tuning a narrow-linewidth laser to the resonance wavelength. Despite the ubiquity of this method, its use has been mostly limited to lab conditions due to its vulnerability to environmental factors and the difficulty of multiplexing, a key factor in imaging applications. In this paper, we develop a new optical-resonator interrogation scheme based on wideband pulse interferometry, potentially capable of achieving high stability against environmental conditions without compromising sensitivity. Additionally, the method can enable multiplexing of several sensors. The unique properties of the pulse-interferometry interrogation approach are studied theoretically and experimentally. Methods for noise reduction in the proposed scheme are presented and experimentally demonstrated, and the overall performance is validated for broadband optical detection of ultrasonic fields. The achieved sensitivity is equivalent to the theoretical limit of a 6 MHz narrow-linewidth laser, 40 times higher than what is usually achieved by incoherent interferometry for the same optical resonator.

  16. Assurance of energy efficiency and data security for ECG transmission in BASNs.

    PubMed

    Ma, Tao; Shrestha, Pradhumna Lal; Hempel, Michael; Peng, Dongming; Sharif, Hamid; Chen, Hsiao-Hwa

    2012-04-01

    With the technological advancement in body area sensor networks (BASNs), low-cost, high-quality electrocardiographic (ECG) diagnosis systems have become important equipment for healthcare service providers. However, energy consumption and data security in BASN-based ECG systems remain two major challenges. In this study, we investigate the properties of compressed ECG data for energy saving, in an effort to devise a selective encryption mechanism and a two-rate unequal error protection (UEP) scheme. The proposed selective encryption mechanism provides a simple yet effective security solution for an ECG sensor-based communication platform, in which only one percent of the data is encrypted without compromising ECG data security; this encrypted portion is essential to ECG data quality because of its disproportionate contribution to distortion reduction. The two-rate UEP scheme achieves significant additional energy savings by unequal allocation of communication energy according to the outcome of the selective encryption, and thus maintains high ECG data transmission quality. Our results show communication energy savings of about 40% and demonstrate higher transmission quality and security, measured in terms of wavelet-based weighted percent root-mean-squared difference.
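The selective-encryption idea, protecting only the coefficients that matter most for distortion reduction, can be sketched as follows. The toy XOR keystream, the magnitude-based selection rule, and the 1% fraction are stand-in assumptions for illustration; the paper's actual cipher and coefficient selection are not reproduced here.

```python
import hashlib
import numpy as np

def _stream(key, k):
    """Key-derived int16 mask stream (toy keystream, not a real cipher)."""
    raw = hashlib.sha256(key).digest() * (2 * k // 32 + 1)
    return np.frombuffer(raw, dtype=np.int16)[:k]

def selective_encrypt(coeffs, key, fraction=0.01):
    """XOR-mask only the largest-magnitude fraction of quantized ECG
    coefficients, leaving the other ~99% in plaintext."""
    q = np.asarray(coeffs, dtype=np.int16)
    k = max(1, int(fraction * q.size))
    idx = np.argsort(np.abs(q))[-k:]          # top-magnitude coefficients
    enc = q.copy()
    enc[idx] ^= _stream(key, k)
    return enc, idx

def selective_decrypt(enc, key, idx):
    """XOR masking is its own inverse, applied at the recorded indices."""
    dec = np.asarray(enc, dtype=np.int16).copy()
    dec[idx] ^= _stream(key, len(idx))
    return dec
```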

  17. Adjacent Channel Interference Reduction for M-WiMAX TDD and WCDMA FDD Coexistence by Utilizing Beamforming in M-WiMAX TDD System

    NASA Astrophysics Data System (ADS)

    Wang, Yupeng; Chang, Kyunghi

    In this paper, we analyze the coexistence issues of M-WiMAX TDD and WCDMA FDD systems. Smart-antenna techniques are applied to mitigate the performance loss induced by adjacent channel interference (ACI) in scenarios where performance is heavily degraded. In addition, an ACI model is proposed to capture the effect of transmit beamforming at the M-WiMAX base station. Furthermore, an MCS-based throughput analysis is proposed that jointly considers the effects of ACI, the system packet-error-rate requirement, and the available modulation and coding schemes, which is not possible with conventional Shannon-equation-based analysis. The results show that the proposed MCS-based method is well suited to analyzing system throughput in a practical manner.
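The distinction between Shannon-based and MCS-based analysis can be sketched as follows: instead of a continuous log2(1 + SINR) rate, throughput is averaged over a discrete MCS table, with ACI modeled as an SINR penalty. The MCS thresholds, efficiencies, SINR distribution, and 3 dB ACI loss below are all hypothetical values for illustration, not the paper's link parameters.

```python
import random

# Hypothetical MCS table: (minimum SINR in dB, spectral efficiency bit/s/Hz).
MCS = [(2.0, 0.5), (5.0, 1.0), (9.0, 1.5), (12.0, 2.0), (16.0, 3.0)]

def throughput(sinr_samples_db, bandwidth_hz=10e6):
    """Average throughput when each sample picks the highest MCS its SINR
    supports (0 if even the lowest MCS is unsupported)."""
    total = 0.0
    for s in sinr_samples_db:
        eff = 0.0
        for threshold, e in MCS:
            if s >= threshold:
                eff = e
        total += eff * bandwidth_hz
    return total / len(sinr_samples_db)

random.seed(0)
clean = [random.gauss(12.0, 3.0) for _ in range(10000)]
with_aci = [s - 3.0 for s in clean]     # assumed 3 dB SINR loss from ACI
```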

  18. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means of obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  19. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    NASA Astrophysics Data System (ADS)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can require long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed, based on an sCMOS camera and FP interferometer sensors with highly homogeneous optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. The SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme is compared. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  20. Surface reconstruction and deformation monitoring of stratospheric airship based on laser scanning technology

    NASA Astrophysics Data System (ADS)

    Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei

    2018-04-01

    Due to the uncertainty of a stratospheric airship's shape and the safety problems this uncertainty can cause, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. A comparison with the original √3-subdivision scheme shows that our scheme reduces surface shrinkage and the number of narrow triangles while preserving sharp features, so surface reconstruction and deformation monitoring of the airship can be performed precisely.
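The Shepard interpolation underlying the modified subdivision scheme is inverse-distance weighting, which can be sketched in a few lines. How the paper's scheme actually positions new subdivision vertices may differ; this shows only the interpolation rule itself.

```python
def shepard(points, values, q, power=2.0, eps=1e-12):
    """Inverse-distance-weighted (Shepard) interpolation at query point q.

    `points` are known sample locations (coordinate tuples), `values`
    their scalar attributes; nearer samples get larger weights.
    """
    num = den = 0.0
    for p, v in zip(points, values):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        if d2 < eps:                 # query coincides with a sample point
            return v
        w = d2 ** (-power / 2.0)     # weight ~ 1 / distance^power
        num += w * v
        den += w
    return num / den
```

Because the weights are positive and normalized, the interpolant stays within the range of the sample values, which helps limit the surface shrinkage and degenerate triangles mentioned above.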

  1. Planning a Target Renewable Portfolio using Atmospheric Modeling and Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Hart, E.; Jacobson, M. Z.

    2009-12-01

    A number of organizations have suggested that an 80% reduction in carbon emissions by 2050 is a necessary step to mitigate climate change, and that decarbonization of the electricity sector is a crucial component of any strategy to meet this target. Integration of large renewable and intermittent generators poses many new problems in power system planning. In this study, we attempt to determine an optimal portfolio of renewable resources to best meet the fluctuating California load while also meeting an 80% carbon-emissions-reduction requirement. A stochastic optimization scheme is proposed, based on a simplified model of the California electricity grid. In this single-busbar power-system model, the load is met with generation from wind, solar thermal, photovoltaic, hydroelectric, geothermal, and natural gas plants. Wind speeds and insolation are calculated using GATOR-GCMOM, a global-through-urban climate-weather-air pollution model; fields were produced for California and Nevada at 21 km S-N by 14 km W-E spatial resolution every 15 minutes for the year 2006. Load data for 2006 were obtained from the California ISO OASIS database. Maximum installed capacities for wind and solar thermal generation were determined from a GIS analysis of potential development sites throughout the state. The stochastic optimization scheme requires that power balance be achieved in a number of meteorological and load scenarios that deviate from the forecasted (or modeled) data. By adjusting the error distributions of the forecasts, the model describes how improvements in wind-speed and insolation forecasting may affect the optimal renewable portfolio. Using a simple model, we describe the diversity, size, and sensitivities of a renewable portfolio that is best suited to the resources and needs of California and that contributes significantly to the reduction of the state's carbon emissions.

  2. Modification of near-wall coherent structures in polymer drag reduced flow: simulation

    NASA Astrophysics Data System (ADS)

    Dubief, Yves; White, Christopher; Shaqfeh, Eric; Moin, Parviz; Lele, Sanjiva

    2002-11-01

    Polymer drag reduced flows are investigated through direct numerical simulations of viscoelastic flows. The solver for the viscoelastic model (FENE-P) is based on higher-order finite difference schemes and a novel implicit time integration method. Its robustness allows the simulation of all drag reduction (DR) regimes from the onset to the maximum drag reduction (MDR). It also permits the use of realistic polymer length and concentration. The maximum polymer extension in our simulation matches that of a polystyrene molecule of 10^6 molecular weight. Two distinct regimes of polymer drag reduced flows are observed: at low drag reduction (LDR, DR< 40-50%), the near-wall structure is essentially similar to Newtonian wall turbulence whereas the high drag reduction regime (HDR, DR from 40-50% to MDR) shows significant differences in the organization of the coherent structures. The 3D information provided by numerical simulations allows the determination of the interaction of polymers and near-wall coherent structures. To isolate the contribution of polymers in the viscous sublayer, the buffer and the outer region of the flow, numerical experiments are performed where the polymer concentration is varied in the wall-normal direction. Finally a mechanism of polymer drag reduction derived from our results and PIV measurements is discussed.

  3. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.

    PubMed

    Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-05-22

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; they are changing people's lives through systems such as e-payment and e-voting. However, the state-of-the-art authentication protocols in these two systems are based on traditional number theory and cannot withstand a quantum-computer attack. To protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theorem research unit (NTRU) lattice. The scheme uses a rejection sampling theorem instead of constructing a trapdoor, does not depend on complex public key infrastructure, and can resist quantum-computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove the scheme secure in the random oracle model and show that it satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms existing identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.

  4. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks

    PubMed Central

    Zhu, Hongfei; Tan, Yu-an; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-01-01

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; they are changing people's lives through systems such as e-payment and e-voting. However, the state-of-the-art authentication protocols in these two systems are based on traditional number theory and cannot withstand a quantum-computer attack. To protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theorem research unit (NTRU) lattice. The scheme uses a rejection sampling theorem instead of constructing a trapdoor, does not depend on complex public key infrastructure, and can resist quantum-computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove the scheme secure in the random oracle model and show that it satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms existing identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size. PMID:29789475

  5. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2004-01-01

    The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. 
The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.

  6. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity, multi-disciplinary design, analysis and optimization suite that integrates solution-generation codes with newly developed tools to improve the overall optimization process. The research is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to that framework through innovative features that extend its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE gives the user the capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) a single-step or multi-step optimization strategy, and (iii) the combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be part of other, more inclusive optimization frameworks. M3DOE is demonstrated on shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry-weight minimization and cruise-range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition (POD) based design-space order-reduction scheme combined with an evolutionary algorithm. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for design-space order reduction.
    The snapshot of the candidate population is updated iteratively using the evolutionary-algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithms as well as POD-based reduced-order modeling, while overcoming the shortcomings inherent in each technique. When linked with M3DOE, it offers a computationally efficient methodology for problems with a high level of complexity and a challenging design space. The newly developed framework is demonstrated on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
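The POD extraction of dominant modes from a snapshot ensemble can be sketched via the singular value decomposition. This is a generic method-of-snapshots illustration, not the dissertation's code; the energy threshold and snapshot shapes are assumed.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Extract dominant POD modes from an ensemble of candidate designs.

    `snapshots` is (n_dof, n_candidates). Returns the ensemble mean and the
    modes capturing the requested fraction of energy, so each candidate can
    be described by a few modal coefficients instead of all variables.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # modes needed for `energy`
    return mean, U[:, :r]

def reduce_design(x, mean, modes):
    """Low-dimensional coordinates of one design in the POD basis."""
    return modes.T @ (x - mean[:, 0])

def reconstruct(coeffs, mean, modes):
    """Map modal coefficients back to the full design space."""
    return mean[:, 0] + modes @ coeffs
```

An evolutionary loop would then search over the modal coefficients, reconstructing full candidates only for evaluation, which is the source of the computational savings described above.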

  7. sdg boson model in the SU(3) scheme

    NASA Astrophysics Data System (ADS)

    Akiyama, Yoshimi

    1985-02-01

    Basic properties of the interacting boson model with s-, d- and g-bosons are investigated for rotational nuclei. An SU(3)-seniority scheme is found for classifying physically important states according to the group reduction chain U(15) ⊃ SU(3). The capability for describing rotational bands increases enormously in comparison with the ordinary sd interacting boson model, and the sdg boson model is shown to describe the so-called anharmonicity effect recently observed in the 168Er nucleus.

  8. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. 
Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
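The error-containment idea lends itself to a small sketch. In the toy Python below, zlib stands in for ICER-3D's wavelet/entropy coder; what matters is only that each rectangular partition is compressed independently, so a corrupted stream costs one region rather than the whole image.

```python
import zlib

# Four spatial partitions of a toy "hyperspectral" cube, each spanning all
# bands; zlib stands in for ICER-3D's wavelet + entropy coding.
partitions = [bytes(range(64)) * (b + 1) for b in range(4)]
streams = [zlib.compress(p) for p in partitions]    # compressed independently

streams[2] = b"\x00corrupted"                       # one partition lost in transit

recovered = []
for s in streams:
    try:
        recovered.append(zlib.decompress(s))
    except zlib.error:
        recovered.append(None)                      # only this region is affected

print([r is not None for r in recovered])           # [True, True, False, True]
```

Progressive decoding within a partition (reconstructing at lower fidelity from a truncated stream) is not modeled here; the sketch shows only the independence property.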

  9. Error function attack of chaos synchronization based encryption schemes.

    PubMed

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.
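The attack can be sketched on a toy system. Below, a logistic-map stream cipher stands in for a chaos-synchronization scheme (an assumption for illustration); the attacker scans trial keys, and the error function (mean deviation between the trial and observed keystreams) dips sharply at the true key.

```python
def keystream(key, n, x0=0.3):
    # Logistic-map keystream: a toy stand-in for a chaos-synchronization cipher.
    x, out = x0, []
    for _ in range(n):
        x = key * x * (1.0 - x)
        out.append(x)
    return out

def error_function(trial, observed):
    # Mean absolute deviation between a trial keystream and the observed one;
    # this is the statistic the error-function attacker minimizes over keys.
    ks = keystream(trial, len(observed))
    return sum(abs(a - b) for a, b in zip(ks, observed)) / len(observed)

true_key = 3.91                       # secret parameter of the sender's map
observed = keystream(true_key, 30)    # stream an eavesdropper has recovered

trials = [round(3.80 + 0.01 * i, 2) for i in range(20)]   # scan the key space
errors = {t: error_function(t, observed) for t in trials}
best = min(errors, key=errors.get)
print(best)   # 3.91 -- the error function dips to zero only at the true key
```

The sharpness of this dip is exactly what makes a scheme weak against the attack; the one-way coupled map lattice scheme favored by the authors keeps the error landscape flat away from the key.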

  10. Security analysis and enhancements of an effective biometric-based remote user authentication scheme using smart cards.

    PubMed

    An, Younghwa

    2012-01-01

    Recently, many biometrics-based user authentication schemes using smart cards have been proposed to address the security weaknesses of user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still insecure against various attacks. We also propose an enhanced scheme that removes these security problems, even if the secret information stored in the smart card is revealed to an attacker. Our security analysis shows that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server.

  12. An efficient chaotic maps-based authentication and key agreement scheme using smartcards for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2013-12-01

    A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation in both schemes is their violation of the contributory property of key agreements. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme provides fewer weaknesses, better security, and more efficiency.
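Chaotic-map schemes of this kind typically build on Chebyshev polynomials, whose semigroup property enables a Diffie-Hellman-style key exchange. The sketch below is a minimal illustration of that property with toy key sizes (deployed schemes evaluate the polynomials over finite fields for security); it is not the paper's protocol.

```python
from math import acos, cos

def chebyshev(n, x):
    # Chebyshev polynomial via T_n(x) = cos(n * arccos(x)) for x in [-1, 1].
    return cos(n * acos(x))

x = 0.53          # public seed value
a, b = 7, 11      # the two parties' private keys (toy sizes)

# Each party publishes T_key(x); the semigroup property
# T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)) then yields a shared secret.
shared_A = chebyshev(a, chebyshev(b, x))
shared_B = chebyshev(b, chebyshev(a, x))
print(abs(shared_A - shared_B) < 1e-9)   # True
```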

  13. Nanocoaxes for Optical and Electronic Devices

    PubMed Central

    Rizal, Binod; Merlo, Juan M.; Burns, Michael J.; Chiles, Thomas C.; Naughton, Michael J.

    2014-01-01

    The evolution of micro/nanoelectronics technology, including the shrinking of devices and integrated circuit components, has included the miniaturization of linear and coaxial structures to micro/nanoscale dimensions. This reduction in the size of coaxial structures may offer advantages to existing technologies and benefit the exploration and development of new technologies. The reduction in the size of coaxial structures has been realized with various permutations between metals, semiconductors and dielectrics for the core, shield, and annulus. This review will focus on fabrication schemes of arrays of metal – nonmetal – metal nanocoax structures using non-template and template methods, followed by possible applications. The performance and scientific advantages associated with nanocoax-based optical devices including waveguides, negative refractive index materials, light emitting diodes, and photovoltaics are presented. In addition, benefits and challenges that accrue from the application of novel nanocoax structures in energy storage, electronic and sensing devices are summarized. PMID:25279400

  14. An Ultra-wideband and Polarization-independent Metasurface for RCS Reduction

    PubMed Central

    Su, Pei; Zhao, Yongjiu; Jia, Shengli; Shi, Wenwen; Wang, Hongli

    2016-01-01

    In this paper, an ultra-wideband and polarization-independent metasurface for radar cross section (RCS) reduction is proposed. The unit cell of the metasurface operates in a linear cross-polarization scheme in a broad band. The phase and amplitude of cross-polarized reflection can be separately controlled by its geometry and rotation angle. Based on the diffuse reflection theory, a 3-bit coding metasurface is designed to reduce the RCS in an ultra-wide band. The wideband property of the metasurface benefits from the wideband cross polarization conversion and flexible phase modulation. In addition, the polarization-independent feature of the metasurface is achieved by tailoring the rotation angle of each element. Both the simulated and measured results demonstrate that the proposed metasurface can reduce the RCS significantly in an ultra-wide frequency band for both normal and oblique incidences, which makes it promising in the applications such as electromagnetic cloaking. PMID:26864084
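The effect of 3-bit coding can be illustrated with a toy one-dimensional model (the element count, random seed, and half-wavelength spacing below are assumptions, not the paper's design): each element reflects with one of eight phases k·45°, and a random code map spreads the specular lobe.

```python
import cmath, math, random

random.seed(1)
N = 64                                              # 1-D array of unit cells
codes = [random.randrange(8) for _ in range(N)]     # 3-bit code -> phase = code * 45 deg

def peak_scatter(phases):
    # Peak |array factor| over scan angle, half-wavelength element spacing.
    peaks = []
    for k in range(721):
        s = -1.0 + 2.0 * k / 720.0                  # s = sin(theta)
        af = sum(cmath.exp(1j * (math.pi * i * s + p)) for i, p in enumerate(phases))
        peaks.append(abs(af))
    return max(peaks)

uniform = peak_scatter([0.0] * N)                        # metal-plate reference
coded = peak_scatter([c * math.pi / 4 for c in codes])   # random 3-bit coding
print(coded < uniform)   # True: coding spreads the specular lobe
```

In toy models of this kind a well-randomized coding typically lowers the peak by roughly 10·log10(N) dB; the paper optimizes the 3-bit code map in two dimensions and verifies the reduction for oblique incidence as well.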

  16. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.

    PubMed

    Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V

    2016-05-26

    The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built from the slowest eigenvectors at equilibrium. The method is revisited here, discussed, and validated using the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution of the eigenvalues and the gap between them. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant-pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with a similar technique, Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.

  17. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    In this work, an analysis of parameter estimation for the retention factor model in GC was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    NASA Astrophysics Data System (ADS)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.
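The continuum/rarefied split at the heart of such hybrids is usually driven by a continuum-breakdown parameter. The sketch below uses a gradient-length local Knudsen number with a hypothetical 0.05 cutoff; the paper's actual breakdown criterion and threshold may differ.

```python
# Assign each cell to the LD (continuum) or DSMC (rarefied) solver using a
# gradient-length local Knudsen number Kn_GL = (lambda / Q) * |dQ/dx|.
# The 0.05 cutoff is a commonly used breakdown threshold, not from the paper.

def local_knudsen(mfp, q, dq_dx):
    return mfp * abs(dq_dx) / q

def partition(cells, cutoff=0.05):
    # cells: list of (mean_free_path, density, density_gradient) per cell
    return ["DSMC" if local_knudsen(*c) > cutoff else "LD" for c in cells]

# Toy 1-D field: steep gradients near a shock (cells 2-3), smooth elsewhere.
cells = [
    (1e-4, 1.0, 0.5),    # Kn_GL = 5e-5   -> LD
    (1e-4, 1.0, 2.0),    # Kn_GL = 2e-4   -> LD
    (1e-3, 0.2, 400.0),  # Kn_GL = 2.0    -> DSMC
    (1e-3, 0.3, 100.0),  # Kn_GL = 0.33   -> DSMC
    (1e-4, 1.2, 1.0),    # Kn_GL = 8.3e-5 -> LD
]
print(partition(cells))  # ['LD', 'LD', 'DSMC', 'DSMC', 'LD']
```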

  19. Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.

    PubMed

    He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk

    2014-10-01

    The radio frequency identification (RFID) technology has been widely adopted and is being deployed as a dominant identification technology in health care domains such as medical information authentication, patient tracking, and blood transfusion medicine. With increasingly stringent security and privacy requirements for RFID-based authentication schemes, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet these requirements. However, many recently published ECC-based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC-based RFID authentication scheme integrated with an ID verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communication cost, and storage requirement.

  20. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    PubMed

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

    Attribute-based encryption (ABE) is an effective way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has important applications in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation in cloud storage. The keyword search in our scheme is attribute-based with access control: when a search succeeds, the cloud server returns the corresponding ciphertext to the user, and the user can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  1. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in Z, X, and Y bases, decoy states are prepared in X and Y bases, and vacuum states are set to no bases. Different from the original decoy-state RFI-QKD scheme whose decoy states are prepared in Z, X and Y bases, in our scheme decoy states are only prepared in X and Y bases, which avoids the redundancy of decoy states in Z basis, saves the random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses in a short time. Numerical simulations show that, considering the finite size effect with reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits at least comparable or even better performance than that of the original decoy-state RFI-QKD scheme. Especially, in terms of the resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, which has great potential to be adopted in current QKD systems.
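The encoding rule that distinguishes the proposed scheme can be sketched as a simple basis allocator (the intensity labels, probabilities, and seed below are illustrative, not the paper's parameters):

```python
import random

# Basis menus per intensity class: unlike the original decoy-state RFI-QKD
# (decoys in Z, X and Y), decoy states here are prepared only in X and Y.
BASES = {"signal": ("Z", "X", "Y"), "decoy": ("X", "Y"), "vacuum": ()}

def prepare(intensity, rng):
    menu = BASES[intensity]
    return rng.choice(menu) if menu else None   # vacuum pulses carry no basis

rng = random.Random(7)
sample = []
for _ in range(10000):
    intensity = rng.choice(("signal", "decoy", "vacuum"))
    sample.append((intensity, prepare(intensity, rng)))

print(any(i == "decoy" and b == "Z" for i, b in sample))   # False
print(all(b is None for i, b in sample if i == "vacuum"))  # True
```

Dropping the Z menu for decoy pulses is precisely where the scheme saves random-number consumption and simplifies the encoder.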

  2. Work, Train, Win: Work-Based Learning Design and Management for Productivity Gains. OECD Education Working Papers, No. 135

    ERIC Educational Resources Information Center

    Kis, Viktoria

    2016-01-01

    Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…

  3. Cryptanalysis and improvement of Yan et al.'s biometric-based authentication scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram

    2014-06-01

    Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) for the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to erase those drawbacks. We analyze Yan et al.'s scheme and identify that it is vulnerable to the off-line password guessing attack and does not protect anonymity. Moreover, in their scheme, the login and password change phases are inefficient at identifying incorrect input, and the inefficiency of the password change phase can cause a denial of service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.

  4. An Improvement of Robust and Efficient Biometrics Based Password Authentication Scheme for Telecare Medicine Information Systems Using Extended Chaotic Maps.

    PubMed

    Moon, Jongho; Choi, Younsung; Kim, Jiye; Won, Dongho

    2016-03-01

    Recently, numerous extended chaotic map-based password authentication schemes that employ smart card technology were proposed for Telecare Medical Information Systems (TMISs). In 2015, Lu et al. used Li et al.'s scheme as a basis to propose a password authentication scheme for TMISs that is based on biometrics and smart card technology and employs extended chaotic maps. Lu et al. demonstrated that Li et al.'s scheme comprises some weaknesses such as those regarding a violation of the session-key security, a vulnerability to the user impersonation attack, and a lack of local verification. In this paper, however, we show that Lu et al.'s scheme is still insecure with respect to issues such as a violation of the session-key security, and that it is vulnerable to both the outsider attack and the impersonation attack. To overcome these drawbacks, we retain the useful properties of Lu et al.'s scheme to propose a new password authentication scheme that is based on smart card technology and requires the use of chaotic maps. Then, we show that our proposed scheme is more secure and efficient and supports security properties.

  5. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    PubMed

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The Telecare medical information system (TMIS) presents effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. Later, it was proved insecure against various active and passive attacks. To erase the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we present an improved anonymous smart card based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, the proposed scheme presents efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We also demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overheads with relevant schemes.

  6. The effect of load reductions on repetition performance for commonly performed multijoint resistance exercises.

    PubMed

    Willardson, Jeffrey M; Simão, Roberto; Fontana, Fabio E

    2012-11-01

    The purpose of this study was to compare 4 different loading schemes for the free weight bench press, wide grip front lat pull-down, and free weight back squat to determine the extent of progressive load reductions necessary to maintain repetition performance. Thirty-two recreationally trained women (age = 29.34 ± 4.58 years, body mass = 59.61 ± 4.72 kg, height = 162.06 ± 4.04 cm) performed 4 resistance exercise sessions that involved 3 sets of the free weight bench press, wide grip front lat pull-down, and free weight back squat, performed in this exercise order during all 4 sessions. Each of the 4 sessions was conducted under different randomly ordered loading schemes, including (a) a constant 10 repetition maximum (RM) load for all 3 sets and for all 3 exercises, (b) a 5% reduction after the first and second sets for all the 3 exercises, (c) a 10% reduction after the first and second sets for all the 3 exercises, and (d) a 15% reduction after the first and second sets for all the 3 exercises. The results indicated that for the wide grip front lat pull-down and free weight back squat, a 10% load reduction was necessary after the first and second sets to accomplish 10 repetitions on all the 3 sets. For the free weight bench press, a load reduction between 10 and 15% was necessary; specifically, a 10% reduction was insufficient and a 15% reduction was excessive, as evidenced by significantly >10 repetitions on the second and third sets for this exercise (p ≤ 0.05). In conclusion, the results of this study indicate that a resistance training prescription that involves 1-minute rest intervals between multiple 10RM sets does require load reductions to maintain repetition performance. Practitioners might apply these results by considering an approximate 10% load reduction after the first and second sets for the exercises examined, when training women of similar characteristics as in this study.
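The prescription can be reduced to a few lines of arithmetic. In the sketch below, the 40 kg example load is arbitrary, and applying each cut to the preceding set's load is one plausible reading of the protocol.

```python
def set_loads(ten_rm_load, reduction):
    # Loads for three sets: reduce after the first and the second set,
    # each cut applied to the preceding set's load (one plausible reading).
    loads = [ten_rm_load]
    for _ in range(2):
        loads.append(round(loads[-1] * (1.0 - reduction), 1))
    return loads

# 40 kg 10RM with the ~10% reduction the study supports for the
# lat pull-down and back squat:
print(set_loads(40.0, 0.10))   # [40.0, 36.0, 32.4]
# The bench press fell between the 10% and 15% schemes:
print(set_loads(40.0, 0.15))   # [40.0, 34.0, 28.9]
```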

  7. Generation of skeletal mechanism by means of projected entropy participation indices

    NASA Astrophysics Data System (ADS)

    Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica

    2017-11-01

    When the dynamics of reactive systems develop very-slow and very-fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.

  8. An Energy-Aware Hybrid ARQ Scheme with Multi-ACKs for Data Sensing Wireless Sensor Networks.

    PubMed

    Zhang, Jinhuan; Long, Jun

    2017-06-12

    Wireless sensor networks (WSNs) are one of the important supporting technologies of edge computing. In WSNs, reliable communications are essential for most applications due to the unreliability of wireless links. In addition, network lifetime is an important performance metric that needs to be considered in many WSN studies. In this paper, an energy-aware hybrid Automatic Repeat-reQuest (ARQ) scheme is proposed to ensure energy efficiency while guaranteeing network transmission reliability. In the scheme, the source node sends data packets continuously within the correct window size, without waiting for an acknowledgement (ACK) for each data packet. When the destination receives K data packets, it returns multiple copies of one ACK for confirmation, to avoid ACK packet loss. The energy consumption of each node in a flat circular network applying the proposed scheme is statistically analyzed, and the cases under which it is more energy efficient than the original scheme are discussed. Moreover, how to select the parameters of the scheme to extend the network lifetime under the constraint of network reliability is addressed, and the energy efficiency of the proposed scheme is evaluated. Simulation results demonstrate that node energy consumption is reduced and the network lifetime is prolonged.
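The benefit of returning multiple copies of one ACK is easy to quantify under an independent-loss assumption (the 0.1 loss rate below is illustrative, not from the paper):

```python
# With per-packet ACK loss probability p, a block confirmation that is sent
# as m identical ACK copies is lost only if every copy is lost: p ** m.
def ack_failure_probability(p, m):
    return p ** m

p = 0.1                         # illustrative ACK loss rate, not from the paper
for m in (1, 2, 3):
    print(m, round(ack_failure_probability(p, m), 6))
# 1 0.1
# 2 0.01
# 3 0.001
```

The extra copies cost m - 1 additional short transmissions, but they avoid retransmitting a whole K-packet block when a confirmation is lost, which is where the energy saving comes from.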

  9. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
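The first correction step, discarding and re-synthesizing corrupted attenuation values per detector element, can be sketched in one dimension. This is a generic linear-interpolation stand-in, not the authors' three-dimensional GPU scheme.

```python
def inpaint_row(readings, metal_mask):
    # Replace flagged detector elements by linear interpolation between the
    # nearest unflagged neighbors (a 1-D stand-in for the 3-D scheme).
    out = list(readings)
    i = 0
    while i < len(out):
        if metal_mask[i]:
            j = i
            while j < len(out) and metal_mask[j]:
                j += 1                       # find the end of the metal run
            left = out[i - 1] if i > 0 else out[j]
            right = out[j] if j < len(out) else left
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

row = [5.0, 5.2, 9.9, 9.7, 5.6, 5.8]             # attenuation line integrals
mask = [False, False, True, True, False, False]  # elements shadowed by metal
print(inpaint_row(row, mask))
```

In the paper's pipeline this interpolated reconstruction only seeds a tissue-class model, whose forward projection then supplies the final substitute values.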

  10. Color encryption scheme based on adapted quantum logistic map

    NASA Astrophysics Data System (ADS)

    Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.

    2014-04-01

    This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, an intermediate chaotic key stream is generated with the help of a quantum chaotic logistic map. Then, each pixel is encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
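The chaining rule, each pixel encrypted with the keystream value and the previous cipher pixel, can be sketched as follows. A classical logistic map stands in for the quantum logistic map here, and all parameter values are illustrative.

```python
def logistic_stream(x0, r, n):
    # Classical logistic map as a stand-in keystream generator; the paper's
    # scheme derives its stream from a quantum logistic map instead.
    x, ks = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def encrypt(pixels, x0=0.37, r=3.99, c0=123):
    ks, prev, cipher = logistic_stream(x0, r, len(pixels)), c0, []
    for p, k in zip(pixels, ks):
        c = (p + k + prev) % 256      # previous cipher pixel feeds the chain
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.37, r=3.99, c0=123):
    ks, prev, plain = logistic_stream(x0, r, len(cipher)), c0, []
    for c, k in zip(cipher, ks):
        plain.append((c - k - prev) % 256)
        prev = c
    return plain

pixels = [10, 200, 34, 77, 255, 0]            # one color channel, toy image
assert decrypt(encrypt(pixels)) == pixels
print("round-trip OK")
```

The chaining makes every cipher pixel depend on all preceding plaintext pixels, which is what gives such schemes their diffusion property.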

  11. Surface profile control of FeNiPt/Pt core/shell nanowires for oxygen reduction reaction

    DOE PAGES

    Zhu, Huiyuan; Zhang, Sen; Su, Dong; ...

    2015-03-18

    The ever-increasing energy demand requires renewable energy schemes with low environmental impacts. Electrochemical energy conversion devices, such as fuel cells, combine fuel oxidization and oxygen reduction reactions and have been studied extensively for renewable energy applications. However, their energy conversion efficiency is often limited by kinetically sluggish chemical conversion reactions, especially oxygen reduction reaction (ORR). [1-5] To date, extensive efforts have been put into developing efficient ORR catalysts with controls on catalyst sizes, compositions, shapes and structures. [6-12] Recently, Pt-based catalysts with core/shell and one-dimensional nanowire (NW) morphologies were found to be promising to further enhance ORR catalysis. With the core/shell structure, the ORR catalysis of a nanoparticle (NP) catalyst can be tuned by both electronic and geometric effects at the core/shell interface. [10,13,14] With the NW structure, the catalyst interaction with the conductive support can be enhanced to facilitate electron transfer between the support and the NW catalyst and to promote ORR. [11,15,16]

  12. A Hash Based Remote User Authentication and Authenticated Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng

    2015-11-01

    To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
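A hash-and-XOR verification step of the kind the scheme relies on can be sketched as follows. This hypothetical flow (identifiers, stored value, messages) only illustrates the primitives; it is not Li et al.'s actual protocol.

```python
import hashlib, secrets

def h(*parts):
    # SHA-256 over the concatenated byte strings (the lightweight hash).
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Registration (hypothetical): the server stores V = h(ID || x) XOR h(PW),
# so the raw password never needs to be kept.
server_secret = secrets.token_bytes(32)          # the server's secret x
ID, PW = b"patient42", b"correct horse"
V = xor(h(ID, server_secret), h(PW))

# Login check: V XOR h(PW') equals h(ID || x) only for the right password.
def verify(user_id, pw_attempt):
    return xor(V, h(pw_attempt)) == h(user_id, server_secret)

print(verify(ID, PW))            # True
print(verify(ID, b"wrong pw"))   # False
```

Because only hashes and XORs are involved, a check like this costs far less than public-key operations, which is the efficiency argument such schemes make.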

  13. A Practical and Secure Coercion-Resistant Scheme for Internet Voting

    NASA Astrophysics Data System (ADS)

    Araújo, Roberto; Foulle, Sébastien; Traoré, Jacques

    Juels, Catalano, and Jakobsson (JCJ) proposed at WPES 2005 the first voting scheme that considers real-world threats and that is more realistic for Internet elections. Their scheme, though, has a quadratic work factor and thereby is not efficient for large scale elections. Based on the work of JCJ, Smith proposed an efficient scheme that has a linear work factor. In this paper we first show that Smith's scheme is insecure. Then we present a new coercion-resistant election scheme with a linear work factor that overcomes the flaw of Smith's proposal. Our solution is based on the group signature scheme of Camenisch and Lysyanskaya (Crypto 2004).

  14. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  16. Role of a gas phase in the kinetics of zinc and iron reduction with carbon from slag melts

    NASA Astrophysics Data System (ADS)

    Chumarev, V. M.; Selivanov, E. N.

    2013-03-01

    The influence of the mass transfer conditions in the gas phase formed at the carbon-slag melt interface on CO regeneration is approximately estimated in the framework of a two-stage scheme of metal reduction from slag melts by carbon. The effect of zinc vapors on the combined reduction of iron and zinc from slags is considered. The influence of the slag composition and temperature on the critical concentration of zinc oxide, above which no iron forms as an individual phase, is explained.

  17. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme retains the advantage of the TE method, which guarantees great accuracy at small wavenumbers, while also inheriting the MA property of keeping the numerical errors within a limited bound. Thus, it yields highly accurate numerical solutions of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function and using a Remez algorithm to minimize its maximum error. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation and is more efficient than the conventional ISFD scheme for elastic modeling.
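
    For context, the conventional TE coefficients that the MTE+MA method improves upon come from matching Taylor terms of the staggered-grid first-derivative stencil. This sketch uses the explicit staggered-grid stencil for illustration (the paper's scheme is implicit, and its optimized coefficients are not reproduced here):

```python
import numpy as np
from math import factorial

def staggered_te_coeffs(M: int) -> np.ndarray:
    """Taylor-expansion coefficients c_m for the explicit staggered-grid
    first derivative
        f'(x) ~ (1/h) * sum_{m=1..M} c_m [f(x+(m-1/2)h) - f(x-(m-1/2)h)].
    Requiring the order-(2k+1) Taylor term to equal 1 for k=0 and vanish
    for k=1..M-1 gives an M x M linear system for the c_m."""
    A = np.zeros((M, M))
    b = np.zeros(M)
    b[0] = 1.0
    for k in range(M):
        for m in range(1, M + 1):
            A[k, m - 1] = 2.0 * (m - 0.5) ** (2 * k + 1) / factorial(2 * k + 1)
    return np.linalg.solve(A, b)

print(staggered_te_coeffs(2))  # classic 4th-order stencil: [9/8, -1/24]
```

These TE coefficients are exact at small wavenumbers but degrade near the Nyquist limit, which is the precision loss the minimax (Remez) optimization targets.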

  18. Intercomparison of Large-Eddy Simulations of Arctic Mixed-Phase Clouds: Importance of Ice Size Distribution Assumptions

    NASA Technical Reports Server (NTRS)

    Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei

    2014-01-01

    Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of the processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and ice water path (IWP) predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of the ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate that LWP and IWP are much closer to the bin model simulations when a modified shape factor, similar to that predicted by the bin model simulations, is used in the bulk scheme. These results demonstrate the importance of the representation of the ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.
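
    The sensitivity to the assumed PSD shape can be illustrated by how a bulk quantity such as the mass-weighted fall speed follows from PSD moments. The sketch below assumes an exponential PSD N(D) = N0*exp(-lam*D) with power-law mass and fall-speed relations; the coefficients are illustrative placeholders, not the intercomparison's prescribed values:

```python
from math import gamma, exp

# Hypothetical power laws: mass m(D) = a*D**b, fall speed v(D) = c*D**d.
a, b = 0.01, 2.1
c, d = 12.0, 0.5
lam = 2000.0  # exponential PSD slope parameter (1/m), illustrative

# Mass-weighted fall speed Vm = Int(m*v*N dD) / Int(m*N dD); for an
# exponential PSD the intercept N0 cancels and the ratio is analytic:
vm_analytic = c * gamma(b + d + 1) / (gamma(b + 1) * lam ** d)

# Numerical check of the same ratio by direct summation.
dD = 1e-6
Ds = [i * dD for i in range(1, 20000)]
num = sum(a * D**b * c * D**d * exp(-lam * D) for D in Ds)
den = sum(a * D**b * exp(-lam * D) for D in Ds)
vm_numeric = num / den
assert abs(vm_numeric - vm_analytic) / vm_analytic < 1e-2
```

Because Vm depends on high-order moments through the gamma functions, two PSDs with the same ice water content but different assumed shapes yield different sedimentation fluxes, which is the mechanism behind the bin/bulk IWP separation described above.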

  19. New features to the night sky radiance model illumina: Hyperspectral support, improved obstacles and cloud reflection

    NASA Astrophysics Data System (ADS)

    Aubé, M.; Simoneau, A.

    2018-05-01

    Illumina is one of the most physically detailed artificial night sky brightness models to date. It has been in continuous development since 2005 [1]. In 2016-17, many improvements were made to the Illumina code, including an overhead cloud scheme, an improved blocking scheme for subgrid obstacles (trees and buildings), and, most importantly, a full hyperspectral modeling approach. Code optimization resulted in a significant reduction in execution time, enabling users to run the model on standard personal computers for some applications. After describing the new schemes introduced in the model, we give some examples of applications for a peri-urban and a rural site, both located inside the International Dark Sky Reserve of Mont-Mégantic (QC, Canada).

  20. Simulations of the failure scenarios of the crab cavities for the nominal scheme of the LHC

    NASA Astrophysics Data System (ADS)

    Yee, B.; Calaga, R.; Zimmermann, F.; Lopez, R.

    2012-02-01

    The Crab Cavity (CC) is a possible solution to the problem of the reduction in luminosity due to the crossing angle of two colliding beams. The CC is a superconducting radio-frequency (RF) cavity that applies a transverse kick to a bunch of particles, producing a rotation so that the collision is head-on and the luminosity improves. For this reason, the Beams Department-Accelerators & Beams Physics group at CERN (BE-ABP) has studied the implementation of the CC scheme at the LHC. It is essential to study the failure scenarios and the damage they could produce in the lattice devices. We have performed simulations of these failures for the nominal scheme.
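
    The luminosity loss a crab cavity is meant to recover can be quantified with the standard geometric reduction factor for Gaussian beams colliding at a crossing angle; the beam parameters below are nominal-LHC-like illustrative values, not taken from this paper:

```python
from math import sqrt

def geometric_luminosity_factor(theta_c: float, sigma_z: float, sigma_x: float) -> float:
    """Standard reduction factor for two Gaussian beams colliding with a
    full crossing angle theta_c:
        R = 1 / sqrt(1 + (theta_c * sigma_z / (2 * sigma_x))**2),
    where the argument is the Piwinski angle."""
    phi = theta_c * sigma_z / (2.0 * sigma_x)
    return 1.0 / sqrt(1.0 + phi ** 2)

# Illustrative nominal-LHC-like numbers: 285 urad full crossing angle,
# 7.55 cm bunch length, ~16.7 um transverse beam size at the IP.
R = geometric_luminosity_factor(285e-6, 7.55e-2, 16.7e-6)
print(round(R, 2))  # ~0.84, i.e. ~16% of luminosity lost to the crossing angle
```

A crab kick rotates the bunches so they overlap head-on at the interaction point, pushing R back toward 1; a cavity failure removes or corrupts that kick abruptly, which is why the failure scenarios matter for machine protection.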

  1. Simultaneous multislice diffusion-weighted MRI of the liver: Analysis of different breathing schemes in comparison to standard sequences.

    PubMed

    Taron, Jana; Martirosian, Petros; Erb, Michael; Kuestner, Thomas; Schwenzer, Nina F; Schmidt, Holger; Honndorf, Valerie S; Weiß, Jakob; Notohamiprodjo, Mike; Nikolaou, Konstantin; Schraml, Christina

    2016-10-01

    To systematically evaluate image characteristics of simultaneous-multislice (SMS)-accelerated diffusion-weighted imaging (DWI) of the liver using different breathing schemes in comparison to standard sequences. DWI of the liver was performed in 10 healthy volunteers and 12 patients at 1.5T using an SMS-accelerated echo planar imaging sequence performed with respiratory-triggering and free breathing (SMS-RT, SMS-FB). Standard DWI sequences served as reference (STD-RT, STD-FB). Reduction of scan time by SMS-acceleration was measured. Image characteristics of SMS-DWI and STD-DWI with both breathing schemes were analyzed quantitatively (apparent diffusion coefficient [ADC], signal-to-noise ratio [SNR]) and qualitatively (5-point Likert scale, 5 = excellent). Qualitative and quantitative parameters were compared using Friedman test and Dunn-Bonferroni post-hoc method with P-values < 0.05 considered statistically significant. SMS-DWI provided diagnostic image quality in volunteers and patients both with RT and FB with a reduction of scan time of 70% (0:56 vs. 3:20 min in FB). Overall image quality did not significantly differ between FB and RT acquisition in both STD and SMS sequences (median STD-RT 5.0, STD-FB 4.5, SMS-RT: 4.75; SMS-FB: 4.5; P = 0.294). SNR in the right hepatic lobe was comparable between the four tested sequences. ADC values were significantly lower in SMS-DWI compared to STD-DWI irrespective of the breathing scheme (1.2 ± 0.2 × 10⁻³ mm²/s vs. 1.0 ± 0.2 × 10⁻³ mm²/s; P < 0.001). SMS-acceleration provides considerable scan time reduction for hepatic DWI with equivalent image quality compared to the STD technique both using RT and FB. Discrepancies in ADC between STD-DWI and SMS-DWI need to be considered when transferring the SMS technique to clinical routine reading. J. MAGN. RESON. IMAGING 2016;44:865-879. © 2016 International Society for Magnetic Resonance in Medicine.
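
    The ADC values compared above follow from the standard mono-exponential diffusion model; a minimal sketch of the two-point calculation (the signal intensities below are hypothetical, chosen to land in the liver range reported):

```python
from math import log

def adc(s0: float, sb: float, b: float) -> float:
    """Mono-exponential ADC from two diffusion weightings:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b
    with b in s/mm^2 and ADC in mm^2/s."""
    return log(s0 / sb) / b

# Illustrative signal intensities at b = 0 and b = 800 s/mm^2.
s0, s800 = 1000.0, 383.0
print(adc(s0, s800, 800.0))  # ~1.2e-3 mm^2/s
```

Because ADC is a ratio of log-signals, any systematic signal bias introduced by the SMS readout propagates directly into the fitted ADC, which is one way the reported STD/SMS discrepancy can arise.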

  2. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals to remove noise under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the [InlineEquation not available: see fulltext.]-function, the problem of desired signal decomposition is solved. A newly proposed thresholding rule works successfully in denoising the ECG. Moreover, by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of the ECG are removed as a direct task. The method was tested extensively with real and simulated clinical ECG signals and showed high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results in the best case. The procedure also proves largely advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.
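
    Wavelet-domain thresholding, the core operation that MABWT adapts, can be sketched with a single-level Haar transform and soft thresholding (a generic textbook illustration, not the MABWT or its adaptive thresholding rule; the signal and threshold are made up):

```python
def haar_step(x):
    """One level of the orthonormal Haar transform: approximation + detail."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def inverse_haar_step(approx, detail):
    """Exact inverse of haar_step."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft(w, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in w]

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]  # step signal + small noise
a, d = haar_step(noisy)
denoised = inverse_haar_step(a, soft(d, 0.2))  # small details (noise) removed
```

Thresholding the detail coefficients suppresses the noise while the large-scale structure survives in the approximation band; MABWT's contribution is making the transform parameters and the threshold adapt to the signal rather than staying fixed as here.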

  3. Assessment of Closed-Loop Control Using Multi-Mode Sensor Fusion For a High Reynolds Number Transonic Jet

    NASA Astrophysics Data System (ADS)

    Low, Kerwin; Elhadidi, Basman; Glauser, Mark

    2009-11-01

    Understanding the different noise production mechanisms caused by the free shear flows in a turbulent jet provides insight for improving "intelligent" feedback mechanisms to control the noise. Toward this effort, a control scheme is based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation is based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and a reduction in the overall sound pressure level were possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that, by estimating velocity-field and dynamic pressure information at various local and far-field locations, sensor fusion techniques can be utilized to attain greater overall control authority.
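
    The low-dimensional azimuthal Fourier representation can be sketched for one snapshot from a ring of pressure sensors; the sensor count and mode content below are synthetic placeholders, not the experiment's configuration:

```python
import numpy as np

# Synthetic snapshot from a ring of 16 azimuthal pressure sensors:
# a mean (m = 0) component plus an m = 2 azimuthal disturbance.
K = 16
theta = 2 * np.pi * np.arange(K) / K
p = 1.0 + 0.3 * np.cos(2 * theta + 0.4)

# Low-dimensional representation: a few complex azimuthal mode amplitudes.
modes = np.fft.fft(p) / K
m0 = modes[0].real            # mean pressure over the ring
m2_amp = 2 * np.abs(modes[2]) # amplitude of azimuthal mode m = 2
print(m0, m2_amp)             # recovers ~1.0 and ~0.3
```

Feeding back only the first few mode amplitudes, rather than all K raw sensor signals, is what makes the controller "low-dimensional": the azimuthal actuators can then target the modes known to correlate with noise production.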

  4. An effective and secure key-management scheme for hierarchical access control in E-medicine system.

    PubMed

    Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit

    2013-04-01

    Recently several hierarchical access control schemes have been proposed in the literature to provide security of e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs huge computational cost for providing verification of public information in the public domain, as it uses the ECC digital signature, which is costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems in our scheme are also solved efficiently compared to other related schemes, making our scheme much more suitable for practical applications of e-medicine systems.
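
    The appeal of a purely symmetric-key hierarchy can be illustrated with the standard top-down hash-based key derivation (a generic construction, not the authors' exact scheme; the class names are hypothetical):

```python
import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    """Top-down key derivation for a hierarchy: any ancestor can recompute
    a descendant's key with one hash per level, but the one-wayness of the
    hash prevents a child from recovering its parent's key."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# Hypothetical e-medicine hierarchy: authority -> department -> ward.
root = hashlib.sha256(b"central-authority-master-secret").digest()
dept = derive_key(root, "cardiology")
ward = derive_key(dept, "ward-3")

# The authority re-derives ward-3's key from the root alone, so only the
# root secret needs secure storage; everything else is recomputed on demand.
assert derive_key(derive_key(root, "cardiology"), "ward-3") == ward
```

Each derivation is one hash call, which is the efficiency argument against ECC point multiplications and signatures; the hard part a full scheme must add, and which this sketch omits, is handling dynamic changes such as revoking or relocating a node.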

  5. High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.

  6. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment, and the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
The results of three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy; (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
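
    The paper's actual table design is not reproduced here, but why a composite row key enables rapid NoSQL query can be illustrated generically: HBase stores rows sorted by key, so a well-chosen key prefix turns a query into a short contiguous scan. The key fields below are hypothetical:

```python
from bisect import bisect_left

# Illustrative composite row keys (project|subject|session|scan). In an
# HBase-like store, rows are kept sorted by key, so records sharing a
# prefix are physically adjacent.
rows = sorted([
    "projA|S001|ses1|T1", "projA|S001|ses2|T1",
    "projA|S002|ses1|T1", "projB|S001|ses1|T1",
])

def prefix_scan(rows, prefix):
    """Return all row keys starting with prefix, mimicking a range scan
    that starts at the prefix and stops when keys no longer match it."""
    i = bisect_left(rows, prefix)
    out = []
    while i < len(rows) and rows[i].startswith(prefix):
        out.append(rows[i])
        i += 1
    return out

print(prefix_scan(rows, "projA|S001"))  # both sessions for subject S001
```

Putting the most selective query dimension first in the key is the design decision such a table scheme encodes; a MapReduce job over one subject or project then touches only one contiguous key range instead of the whole table.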

  7. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

    Color quantization (CQ) is an image processing task popularly used to convert true color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with a neuron-dependent frequency sensitive learning model, a global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts in CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde-Buzo-Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with much greater robustness against variations in network parameters than the state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters, and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image.
The drawback of this encoding scheme is the additional storage overhead, which can be reduced by leveraging an existing encoder in an overall lossy compression scheme.
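
    The frequency-sensitive winner selection at the heart of FS-SOM can be sketched in one dimension; this is a simplified competitive-learning illustration without the SOM neighborhood adaptation or butterfly permutation, and all parameters are hypothetical:

```python
import random

def fscl_palette(pixels, k=4, epochs=5, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning, 1-D sketch: the winner is
    chosen by win_count * distance, so frequently winning neurons yield to
    underutilized ones (the 'dead neuron' relief FS-SOM builds on)."""
    rng = random.Random(seed)
    neurons = [float(v) for v in rng.sample(sorted(set(pixels)), k)]
    wins = [1] * k
    for _ in range(epochs):
        for p in pixels:
            # Frequency-sensitive distortion measure: count * distance.
            j = min(range(k), key=lambda i: wins[i] * abs(p - neurons[i]))
            neurons[j] += lr * (p - neurons[j])  # move winner toward input
            wins[j] += 1
    return sorted(neurons), wins

# Two well-separated, equally populated intensity clusters.
pixels = [9, 10, 11, 12, 199, 200, 202, 205] * 20
palette, wins = fscl_palette(pixels, k=2)
print(palette, wins)
```

The win-count scaling is what prevents one neuron from capturing every input: as its count grows, its effective distance inflates until neglected neurons start winning, which balances palette utilization.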

  8. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD).

    PubMed

    Suzuki, Kenji

    2009-09-21

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.

  9. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet validated when considering variety of multi-level analysis in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by MapReduce paradigm. We introduce a HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. 
Three empirical experiments results are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with built-in data allocation strategy, (2) a summary statistic model is empirically verified on grid framework and is compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces 8-fold of wall clock time and 14-fold of resource time, and (3) the proposed HBase table scheme improves MapReduce computation with 7 fold reduction of wall time compare with a naïve scheme when datasets are relative small. The source code and interfaces have been made publicly available.

  10. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet validated when considering variety of multi-level analysis in medical imaging. Our target design criteria are (1) improving the framework’s performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by MapReduce paradigm. We introduce a HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. 
Three empirical experiments results are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with built-in data allocation strategy, (2) a summary statistic model is empirically verified on grid framework and is compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces 8-fold of wall clock time and 14-fold of resource time, and (3) the proposed HBase table scheme improves MapReduce computation with 7 fold reduction of wall time compare with a naïve scheme when datasets are relative small. The source code and interfaces have been made publicly available. PMID:29887668

  11. Viscous flow drag reduction; Symposium, Dallas, Tex., November 7, 8, 1979, Technical Papers

    NASA Technical Reports Server (NTRS)

    Hough, G. R.

    1980-01-01

    The symposium focused on laminar boundary layers, boundary layer stability analysis of a natural laminar flow glove on the F-111 TACT aircraft, drag reduction of an oscillating flat plate with an interface film, electromagnetic precipitation and ducting of particles in turbulent boundary layers, large eddy breakup scheme for turbulent viscous drag reduction, blowing and suction, polymer additives, and compliant surfaces. Topics included influence of environment in laminar boundary layer control, generation rate of turbulent patches in the laminar boundary layer of a submersible, drag reduction of small amplitude rigid surface waves, and hydrodynamic drag and surface deformations generated by liquid flows over flexible surfaces.

  12. A Quantum Proxy Signature Scheme Based on Genuine Five-qubit Entangled State

    NASA Astrophysics Data System (ADS)

    Cao, Hai-Jing; Huang, Jun; Yu, Yao-Feng; Jiang, Xiu-Li

    2014-09-01

    In this paper, a very efficient and secure quantum proxy signature scheme is proposed. It is based on controlled quantum teleportation, with a genuine five-qubit entangled state serving as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature, and verification. Quantum key distribution and the one-time pad are adopted in our scheme, which guarantee not only the unconditional security of the scheme but also the anonymity of the message's owner.

  13. An improved and effective secure password-based authentication and key agreement scheme using smart cards for the telecare medicine information system.

    PubMed

    Das, Ashok Kumar; Bruhadeshwar, Bezawada

    2013-10-01

    Recently, Lee and Liu proposed an efficient password-based authentication and key agreement scheme using smart cards for the telecare medicine information system [J. Med. Syst. (2013) 37:9933]. In this paper, we show that although their scheme is efficient, it still has two security weaknesses: (1) design flaws in the authentication phase and (2) design flaws in the password change phase. In order to withstand these flaws, we propose an improvement of their scheme that also keeps the original merits of Lee-Liu's scheme. We show that our scheme is efficient compared to Lee-Liu's scheme. Further, through security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that it is secure against passive and active attacks.

  14. Improving Biometric-Based Authentication Schemes with Smart Card Revocation/Reissue for Wireless Sensor Networks.

    PubMed

    Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho

    2017-04-25

    User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against impersonation attack, and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node and perform illegal smart card revocation/reissue, and we prove that Park et al.'s scheme fails to provide secure revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Finally, we provide security and performance analysis between previous schemes and the proposed scheme, and provide formal analysis based on the random oracle model. The results prove that the proposed scheme can solve the weaknesses of impersonation attack and other security flaws in the security analysis section. Furthermore, performance analysis shows that the computational cost is lower than that of the previous scheme.

  15. Improving Biometric-Based Authentication Schemes with Smart Card Revocation/Reissue for Wireless Sensor Networks

    PubMed Central

    Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho

    2017-01-01

    User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against impersonation attack, and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node and perform illegal smart card revocation/reissue, and we prove that Park et al.'s scheme fails to provide secure revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Finally, we provide security and performance analysis between previous schemes and the proposed scheme, and provide formal analysis based on the random oracle model. The results prove that the proposed scheme can solve the weaknesses of impersonation attack and other security flaws in the security analysis section. Furthermore, performance analysis shows that the computational cost is lower than that of the previous scheme. PMID:28441331

  16. An improved scheme for Flip-OFDM based on Hartley transform in short-range IM/DD systems.

    PubMed

    Zhou, Ji; Qiao, Yaojun; Cai, Zhuo; Ji, Yuefeng

    2014-08-25

    In this paper, an improved Flip-OFDM scheme is proposed for IM/DD optical systems, where the modulation/demodulation processing takes advantage of the fast Hartley transform (FHT) algorithm. We realize the improved scheme in one symbol period, whereas the conventional Flip-OFDM scheme based on the fast Fourier transform (FFT) requires two consecutive symbol periods. The complexity of many operations in the improved scheme, such as the CP operation, polarity inversion and symbol delay, is therefore half of that in the conventional scheme. Compared to the FFT with a complex input constellation, the complexity of the FHT with a real input constellation is halved. A transmission experiment over 50-km SSMF has been carried out to verify the feasibility of the improved scheme. In conclusion, the improved scheme has the same BER performance as the conventional scheme but a clear advantage in complexity.
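    The FFT/FHT relationship the authors exploit can be illustrated in a few lines. This is a generic numpy sketch (not the paper's implementation), using the identity H[k] = Re(X[k]) - Im(X[k]) and the fact that the discrete Hartley transform is, up to a 1/N factor, its own inverse:

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT identity H[k] = Re(X[k]) - Im(X[k]);
    # for real input the transform itself is real-valued, which is the source
    # of the complexity saving over a complex FFT.
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.array([1.0, 2.0, 0.0, -1.0])
H = dht(x)
# The DHT is an involution up to a factor N, so no separate inverse is needed.
assert np.allclose(dht(H), len(x) * x)
```

    The self-inverse property means the same hardware block can serve both modulation and demodulation, which is one reason FHT-based OFDM variants are attractive in IM/DD links.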

  17. Model predictive control based on reduced order models applied to belt conveyor system.

    PubMed

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
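    Balanced truncation itself is a standard procedure; a minimal square-root sketch in Python/scipy is shown below. The toy 2-state model is purely illustrative (it is not the belt-conveyor model), and the function names are our own:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Controllability/observability Gramians:
    #   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root balancing: the SVD of the Cholesky-factor product yields
    # the Hankel singular values and the balancing projections.
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    S = np.diag(s[:r] ** -0.5)
    Tl = S @ U[:, :r].T @ Lo.T          # left projection  (r x n)
    Tr = Lc @ Vt[:r].T @ S              # right projection (n x r)
    return Tl @ A @ Tr, Tl @ B, C @ Tr, s

# Toy stable 2-state model (illustrative only)
A = np.array([[-1.0, 0.5], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
```

    Discarding states with small Hankel singular values is what gives the a-priori error bound the authors exploit when sizing the Kalman estimators.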

  18. Estimation of Cyclic Shift with Delayed Correlation and Matched Filtering in Time Domain Cyclic-SLM for PAPR Reduction

    PubMed Central

    2016-01-01

    Time domain cyclic-selective mapping (TDC-SLM) reduces the peak-to-average power ratio (PAPR) in OFDM systems, although the amounts of the cyclic shifts are required at the receiver to recover the transmitted signal. One of the critical issues of the SLM scheme is sending the side information (SI), which reduces the throughput in wireless OFDM systems. The proposed scheme implements delayed correlation and matched filtering (DC-MF) to estimate the amounts of the cyclic shifts in the receiver. In the proposed scheme, the DC-MF is placed after the frequency domain equalization (FDE) to improve the accuracy of cyclic shift estimation. The accuracy rate of the proposed scheme reaches 100% at Eb/N0 = 5 dB and the bit error rate (BER) improves by 0.2 dB as compared with the conventional TDC-SLM. The BER performance of the proposed scheme is also better than that of the conventional TDC-SLM even when a nonlinear high power amplifier is assumed. PMID:27752539
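    The core of this kind of shift estimation, finding the lag that maximizes a circular correlation against a known reference, can be sketched generically. This is an illustrative numpy model, not the authors' receiver chain:

```python
import numpy as np

def estimate_cyclic_shift(ref, rx):
    # Circular cross-correlation computed via the FFT; the index of the
    # correlation peak is the estimated cyclic shift.
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(0)
ref = rng.standard_normal(64)     # known reference block
rx = np.roll(ref, 17)             # cyclically shifted "received" block
assert estimate_cyclic_shift(ref, rx) == 17
```

    Because the shift is recovered blindly from the correlation peak, no explicit side information needs to be transmitted, which is the throughput advantage the abstract points to.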

  19. Chain-Based Communication in Cylindrical Underwater Wireless Sensor Networks

    PubMed Central

    Javaid, Nadeem; Jafri, Mohsin Raza; Khan, Zahoor Ali; Alrajeh, Nabil; Imran, Muhammad; Vasilakos, Athanasios

    2015-01-01

    Appropriate network design is very significant for Underwater Wireless Sensor Networks (UWSNs). Application-oriented UWSNs are planned to achieve certain objectives. Therefore, there is always a demand for efficient data routing schemes, which can fulfill certain requirements of application-oriented UWSNs. These networks can be of any shape, i.e., rectangular, cylindrical or square. In this paper, we propose chain-based routing schemes for application-oriented cylindrical networks and also formulate mathematical models to find a global optimum path for data transmission. In the first scheme, we devise four interconnected chains of sensor nodes to perform data communication. In the second scheme, we propose a routing scheme in which two chains of sensor nodes are interconnected, whereas in the third scheme, single-chain-based routing is performed in cylindrical networks. After finding local optimum paths in separate chains, we find global optimum paths through their interconnection. Moreover, we develop a computational model for the analysis of end-to-end delay. We compare the performance of the above three proposed schemes with that of Power Efficient Gathering System in Sensor Information Systems (PEGASIS) and Congestion adjusted PEGASIS (C-PEGASIS). Simulation results show that our proposed 4-chain based scheme performs better than the other selected schemes in terms of network lifetime, end-to-end delay, path loss, transmission loss, and packet sending rate. PMID:25658394

  20. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by the shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
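    The Hotelling step at the heart of the CHO is compact once the image data have been reduced to channel outputs. The following sketch uses synthetic channel outputs (the channels, covariance, and signal profile are made up for illustration; a real study would estimate them from scans):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n = 5, 200
# Simulated channel-output vectors: signal-absent vs signal-present classes
g0 = rng.standard_normal((n, n_ch))
g1 = rng.standard_normal((n, n_ch)) + np.array([0.8, 0.4, 0.2, 0.1, 0.0])

K = 0.5 * (np.cov(g0.T) + np.cov(g1.T))           # pooled channel covariance
w = np.linalg.solve(K, g1.mean(0) - g0.mean(0))   # Hotelling template
t0, t1 = g0 @ w, g1 @ w                           # test statistics per class
d2 = (t1.mean() - t0.mean())**2 / (0.5 * (t0.var() + t1.var()))  # SNR^2
```

    It is the pooled covariance K that dominates the data requirement; estimating it more efficiently (e.g., with the LOOL method) is precisely what lets the study cut the number of scans.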

  1. Public-key quantum digital signature scheme with one-time pad private-key

    NASA Astrophysics Data System (ADS)

    Chen, Feng-Lin; Liu, Wan-Fang; Chen, Su-Gen; Wang, Zhi-Hua

    2018-01-01

    A quantum digital signature scheme based on a public-key quantum cryptosystem is proposed. In the scheme, the verification public-key is derived from the signer's identity information (such as an e-mail address) following the idea of identity-based encryption, and the signature private-key is generated by a one-time pad (OTP) protocol. The public-key/private-key pair consists of classical bits, but the signature cipher consists of quantum qubits. After the signer announces the public-key and generates the final quantum signature, each verifier can publicly verify whether the signature is valid using the public-key and the quantum digital digest. Analysis shows that the proposed scheme satisfies non-repudiation and unforgeability. Information-theoretic security of the scheme is ensured by the indistinguishability of quantum states and the OTP protocol. Being based on a public-key cryptosystem, the proposed scheme is easier to realize than other quantum signature schemes under current technical conditions.
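    The classical OTP building block mentioned here is simply XOR with a single-use key as long as the message; its information-theoretic security rests on the key never being reused. A generic sketch (illustrative only, not the paper's protocol):

```python
import secrets

def otp_encrypt(msg: bytes, key: bytes) -> bytes:
    # XOR with a key that is as long as the message and used exactly once
    assert len(key) == len(msg)
    return bytes(m ^ k for m, k in zip(msg, key))

msg = b"sign me"
key = secrets.token_bytes(len(msg))   # fresh uniformly random key
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg    # XOR is an involution: decrypts itself
```

    Because every key bit is uniformly random and independent of the message, the ciphertext reveals nothing about the plaintext, which is the classical ingredient the scheme combines with quantum indistinguishability.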

  2. Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    1998-01-01

    A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and the accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on the gas-kinetic theory is presented. The flux construction strategy may shed some light on the possible modification of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as the development of new schemes for a non-strictly hyperbolic system.

  3. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    PubMed

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed in the examples; the computational results confirm that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Secure Communications in CIoT Networks with a Wireless Energy Harvesting Untrusted Relay

    PubMed Central

    Hu, Hequn; Liao, Xuewen

    2017-01-01

    The Internet of Things (IoT) represents a bright prospect that a variety of common appliances can connect to one another, as well as with the rest of the Internet, to vastly improve our lives. Unique communication and security challenges have been brought out by the limited hardware, low-complexity, and severe energy constraints of IoT devices. In addition, a severe spectrum scarcity problem has also been stimulated by the use of a large number of IoT devices. In this paper, cognitive IoT (CIoT) is considered where an IoT network works as the secondary system using underlay spectrum sharing. A wireless energy harvesting (EH) node is used as a relay to improve the coverage of an IoT device. However, the relay could be a potential eavesdropper to intercept the IoT device’s messages. This paper considers the problem of secure communication between the IoT device (e.g., sensor) and a destination (e.g., controller) via the wireless EH untrusted relay. Since the destination can be equipped with adequate energy supply, secure schemes based on destination-aided jamming are proposed based on power splitting (PS) and time splitting (TS) policies, called intuitive secure schemes based on PS (Int-PS), precoded secure scheme based on PS (Pre-PS), intuitive secure scheme based on TS (Int-TS) and precoded secure scheme based on TS (Pre-TS), respectively. The secure performances of the proposed schemes are evaluated through the metric of probability of successfully secure transmission (PSST), which represents the probability that the interference constraint of the primary user is satisfied and the secrecy rate is positive. PSST is analyzed for the proposed secure schemes, and the closed form expressions of PSST for Pre-PS and Pre-TS are derived and validated through simulation results. Numerical results show that the precoded secure schemes have better PSST than the intuitive secure schemes under similar power consumption. 
When the secure schemes based on PS and TS policies have similar PSST, the average transmit power consumption of the secure scheme based on TS is lower. The influences of power splitting and time splitting ratios are also discussed through simulations. PMID:28869540

  5. Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes

    PubMed Central

    Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C.-W.

    2014-01-01

    Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions. PMID:25207333

  6. Efficient and provable secure pairing-free security-mediated identity-based identification schemes.

    PubMed

    Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C-W

    2014-01-01

    Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.

  7. Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.

    PubMed

    Mones, Letif; Bernstein, Noam; Csányi, Gábor

    2016-10-11

    Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.
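    The reconstruction idea, fitting a Gaussian process to sampled free-energy data and reading off the posterior mean, can be sketched with a plain RBF-kernel regression on a synthetic double-well surface. All numbers here are illustrative, and the sketch regresses on function values where the paper uses gradient estimators:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Noisy samples of a synthetic double-well "free energy" along one CV
x = np.linspace(-2.0, 2.0, 15)
y = x**4 - 2*x**2 + 0.05 * np.random.default_rng(2).standard_normal(x.size)

xs = np.linspace(-2.0, 2.0, 101)
K = rbf(x, x) + 0.05**2 * np.eye(x.size)    # kernel + noise variance
mean = rbf(xs, x) @ np.linalg.solve(K, y)   # GP posterior mean surface
```

    Separating the reconstruction (the GP fit) from the bias used during dynamics is the paper's key point: the final surface need not equal the accumulated metadynamics bias.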

  8. Privacy Protection for Telecare Medicine Information Systems Using a Chaotic Map-Based Three-Factor Authenticated Key Agreement Scheme.

    PubMed

    Zhang, Liping; Zhu, Shaohui; Tang, Shanyu

    2017-03-01

    Telecare medicine information systems (TMIS) provide flexible and convenient e-health care. However, the medical records transmitted in TMIS are exposed to unsecured public networks, so TMIS are more vulnerable to various types of security threats and attacks. To provide privacy protection for TMIS, a secure and efficient authenticated key agreement scheme is urgently needed to protect the sensitive medical data. Recently, Mishra et al. proposed a biometrics-based authenticated key agreement scheme for TMIS using hash functions and nonces, and claimed that their scheme could eliminate the security weaknesses of Yan et al.'s scheme and provide dynamic identity protection and user anonymity. In this paper, however, we demonstrate that Mishra et al.'s scheme suffers from replay attacks and man-in-the-middle attacks, and fails to provide perfect forward secrecy. To overcome the weaknesses of Mishra et al.'s scheme, we then propose a three-factor authenticated key agreement scheme to enable the patient to enjoy the remote healthcare services via TMIS with privacy protection. The chaotic map-based cryptography is employed in the proposed scheme to achieve a delicate balance of security and performance. Security analysis demonstrates that the proposed scheme resists various attacks and provides several attractive security properties. Performance evaluation shows that the proposed scheme increases efficiency in comparison with other related schemes.
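    Chaotic-map key agreement schemes of this family typically rest on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which enables a Diffie-Hellman-style exchange. A minimal numeric sketch of that property (illustrative only; practical schemes use the extended Chebyshev map over a prime field, not floating point):

```python
import math

def chebyshev(n, x):
    # Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]
    return math.cos(n * math.acos(x))

x = 0.53          # public seed
r, s = 7, 11      # the two parties' private exponents

# Semigroup property: both parties derive the same value T_rs(x)
shared_a = chebyshev(r, chebyshev(s, x))
shared_b = chebyshev(s, chebyshev(r, x))
assert abs(shared_a - shared_b) < 1e-9
assert abs(shared_a - chebyshev(r * s, x)) < 1e-9
```

    The balance the authors mention comes from the fact that evaluating such maps is far cheaper than modular exponentiation or pairings, while recovering n from T_n(x) is assumed hard.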

  9. PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs.

    PubMed

    Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun

    2015-12-09

    In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes' ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes.
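    The identification logic can be caricatured in a few lines: the source treats the first node on the forwarding path whose per-hop ACK never arrived as the suspect. This is a deliberate simplification of the paper's scheme (which also routes each ACK back over a different path), with hypothetical node names:

```python
def suspect_node(path, acks_received):
    # First node along the forwarding path whose ACK never came back;
    # the packet was most likely dropped at or just before that node.
    for node in path:
        if node not in acks_received:
            return node
    return None  # every hop acknowledged: normal delivery

path = ["n1", "n2", "n3", "sink"]
# n3 silently drops the packet, so neither n3 nor the sink ever ACKs
assert suspect_node(path, {"n1", "n2"}) == "n3"
```

    Returning each ACK over a disjoint path, as the paper proposes, keeps an attacker on the forward path from also suppressing the evidence against itself.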

  10. PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs

    PubMed Central

    Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun

    2015-01-01

    In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes’ ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes. PMID:26690178

  11. Secure anonymity-preserving password-based user authentication and session key agreement scheme for telecare medicine information systems.

    PubMed

    Sutrala, Anil Kumar; Das, Ashok Kumar; Odelu, Vanga; Wazid, Mohammad; Kumari, Saru

    2016-10-01

    Information and communication technology (ICT) has changed the entire paradigm of society. ICT facilitates people to use medical services over the Internet, thereby reducing the travel cost, hospitalization cost and time to a greater extent. Recent advancements in Telecare Medicine Information System (TMIS) facilitate users/patients to access medical services over the Internet by gaining health monitoring facilities at home. Amin and Biswas recently proposed a RSA-based user authentication and session key agreement protocol usable for TMIS, which is an improvement over Giri et al.'s RSA-based user authentication scheme for TMIS. In this paper, we show that though Amin-Biswas's scheme considerably improves the security drawbacks of Giri et al.'s scheme, their scheme has security weaknesses as it suffers from attacks such as privileged insider attack, user impersonation attack, replay attack and also offline password guessing attack. A new RSA-based user authentication scheme for TMIS is proposed, which overcomes the security pitfalls of Amin-Biswas's scheme and also preserves the user anonymity property. A careful formal security analysis is done using the widely accepted Burrows-Abadi-Needham (BAN) logic and the random oracle model. An informal security analysis of the scheme is also carried out. These security analyses show the robustness of our new scheme against the various known attacks as well as the attacks found in Amin-Biswas's scheme. The simulation of the proposed scheme using the widely accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is also done. We present a new user authentication and session key agreement scheme for TMIS, which fixes the mentioned security pitfalls found in Amin-Biswas's scheme, and we also show that the proposed scheme provides better security than other existing schemes through rigorous security analysis and a verification tool. 
Furthermore, we present the formal security verification of our scheme using the widely accepted AVISPA tool. High security and extra functionality features allow our proposed scheme to be applicable for telecare medicine information systems which is used for e-health care medical applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Regolith thermal energy storage for lunar nighttime power

    NASA Technical Reports Server (NTRS)

    Tillotson, Brian

    1992-01-01

    A scheme for providing nighttime electric power to a lunar base is described. This scheme stores thermal energy in a pile of regolith. Any such scheme must somehow improve on the poor thermal conductivity of lunar regolith in vacuum. Two previous schemes accomplish this by casting or melting the regolith. The scheme described here wraps the regolith in a gas-tight bag and introduces a light gas to enhance thermal conductivity. This allows the system to be assembled with less energy and equipment than schemes which require melting of regolith. A point design based on the new scheme is presented. Its mass from Earth compares favorably with the mass of a regenerative fuel cell of equal capacity.
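    To see the scale such a store implies, a back-of-envelope sizing with E = m·c_p·ΔT is instructive. Every number below is an illustrative assumption (typical regolith heat capacity, an assumed temperature swing and conversion efficiency), not a figure from the point design:

```python
# Back-of-envelope sizing of a regolith thermal store (all values assumed)
c_p = 840.0            # J/(kg K), typical lunar regolith specific heat
dT = 500.0             # K, usable temperature swing of the pile
night_s = 354 * 3600   # s, lunar night lasts roughly 354 hours
p_elec = 10e3          # W of electric power to deliver through the night
eta = 0.2              # assumed thermal-to-electric conversion efficiency

e_thermal = p_elec * night_s / eta   # J of heat that must be stored
mass = e_thermal / (c_p * dT)        # kg of regolith in the pile
```

    The result lands in the range of a few hundred tonnes of regolith, which is why schemes that avoid melting or casting the pile, and instead only bag it and add conduction-enhancing gas, save so much assembly energy and equipment.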

  13. A new semiclassical decoupling scheme for electronic transitions in molecular collisions - Application to vibrational-to-electronic energy transfer

    NASA Technical Reports Server (NTRS)

    Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.

    1980-01-01

    A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in P vs lambda results.

  14. Modeling and performance analysis of an improved movement-based location management scheme for packet-switched mobile communication systems.

    PubMed

    Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon

    2014-01-01

    One of the key technologies to support the mobility of mobile stations (MSs) in mobile communication systems is location management, which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering the bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling for the location update and paging signaling loads of the proposed scheme is developed thoroughly and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
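    The conventional single-threshold movement-based scheme is easy to caricature: the MS counts cell-boundary crossings and performs a location update whenever the counter reaches the threshold. The sketch below shows only that baseline (the paper's two-threshold variant refines it, which this omits):

```python
def location_updates(num_crossings, threshold):
    # Count the location updates triggered as the movement counter
    # reaches the threshold and resets, over a run of boundary crossings.
    count, updates = 0, 0
    for _ in range(num_crossings):
        count += 1
        if count >= threshold:
            updates += 1
            count = 0
    return updates

# 10 crossings with threshold 3 -> updates after the 3rd, 6th, 9th crossing
assert location_updates(10, 3) == 3
```

    The design trade-off is visible even here: a larger threshold means fewer updates but a larger paging area when a call arrives, which is what the analytical model balances.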

  15. Widely tunable opto-electronic oscillator

    NASA Astrophysics Data System (ADS)

    Maxin, J.; Pillet, G.; Morvan, L.; Dolfi, D.

    2012-03-01

    We present here a widely tunable opto-electronic oscillator (OEO) based on an Er,Yb:glass Dual Frequency Laser (DFL) at 1.53 μm. The beatnote is stabilized with an optical fiber delay line. Compared to classical opto-electronic oscillators, this architecture does not need an RF filter and offers wide tunability. We measured a 67 dB reduction of the phase noise power spectral density (PSD) at 10 Hz from the carrier, leading to a level of -27 dBc/Hz with only 100 m of optical fiber. Moreover, the scheme offers a microwave signal tunability from 2.5 to 5.5 GHz, limited by the RF components.

  16. Diagnosis diagrams for passing signals on an automatic block signaling railway section

    NASA Astrophysics Data System (ADS)

    Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.

    2018-01-01

    This work presents a diagnosis method for railway traffic security installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway block equipped with an automatic block signaling installation. These charts are based on the exploitation electric schemes and are subsequently used to develop a diagnosis software package. The software package thus developed contributes substantially to reducing the time needed to detect and remedy these types of installation faults. Its use also prevents wrong decisions in the fault detection process, decisions that may result in longer remedy times and, sometimes, in railway traffic events.

  17. Optimal remote preparation of arbitrary multi-qubit real-parameter states via two-qubit entangled states

    NASA Astrophysics Data System (ADS)

    Wei, Jiahua; Shi, Lei; Luo, Junwen; Zhu, Yu; Kang, Qiaoyan; Yu, Longqiang; Wu, Hao; Jiang, Jun; Zhao, Boxin

    2018-06-01

    In this paper, we present an efficient scheme for remote state preparation of arbitrary n-qubit states with real coefficients. The quantum channel is composed of n maximally entangled two-qubit states, and several appropriate mutually orthogonal bases involving the real parameters of the prepared states are delicately constructed without the introduction of auxiliary particles. It is noted that the success probability is 100% under the condition that the parameters of the prepared states are all real. Compared to preparing general states, the success probability of our protocol is improved at the cost of a reduction in the information carried by the transmitted state.

  18. Nonrelativistic approaches derived from point-coupling relativistic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lourenco, O.; Dutra, M.; Delfino, A.

    2010-03-15

    We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A=B=0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ^3, leads to parametrizations for which σ=1.

  19. Enhanced force sensitivity and noise squeezing in an electromechanical resonator coupled to a nanotransistor

    NASA Astrophysics Data System (ADS)

    Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.

    2010-12-01

    A nanofield-effect transistor (nano-FET) is coupled to a massive piezoelectricity-based electromechanical resonator integrated with a parametric amplifier. The mechanical parametric amplifier can enhance the resonator's displacement and the resulting electrical signal is further amplified by the nano-FET. This hybrid amplification scheme yields an increase in the mechanical displacement signal by 70 dB, resulting in a force sensitivity of 200 aN Hz^-1/2 at 3 K. The mechanical parametric amplifier can also squeeze the displacement noise in one oscillation phase by 5 dB, enabling a factor of 4 reduction in the thermomechanical noise force level.

  20. Biophysical risks to carbon sequestration and storage in Australian drylands.

    PubMed

    Nolan, Rachael H; Sinclair, Jennifer; Eldridge, David J; Ramp, Daniel

    2018-02-15

    Carbon abatement schemes that reduce land clearing and promote revegetation are now an important component of climate change policy globally. There is considerable potential for these schemes to operate in drylands which are spatially extensive. However, projects in these environments risk failure through unplanned release of stored carbon to the atmosphere. In this review, we identify factors that may adversely affect the success of vegetation-based carbon abatement projects in dryland ecosystems, evaluate their likelihood of occurrence, and estimate the potential consequences for carbon storage and sequestration. We also evaluate management strategies to reduce risks posed to these carbon abatement projects. Identified risks were primarily disturbances, including unplanned fire, drought, and grazing. Revegetation projects also risk recruitment failure, thereby failing to reach projected rates of sequestration. Many of these risks are dependent on rainfall, which is highly variable in drylands and susceptible to further variation under climate change. Resprouting vegetation is likely to be less vulnerable to disturbance and have faster recovery rates upon release from disturbance. We conclude that there is a strong impetus for identifying management strategies and risk reduction mechanisms for carbon abatement projects. Risk mitigation would be enhanced by effective co-ordination of mitigation strategies at scales larger than individual abatement project boundaries, and by implementing risk assessment throughout project planning and implementation stages. Reduction of risk is vital for maximising carbon sequestration of individual projects and for reducing barriers to the establishment of new projects entering the market. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Limited effect of anthropogenic nitrogen oxides on Secondary Organic Aerosol formation

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Unger, N.; Hodzic, A.; Emmons, L.; Knote, C.; Tilmes, S.; Lamarque, J.-F.; Yu, P.

    2015-08-01

    Globally, secondary organic aerosol (SOA) is mostly formed from biogenic volatile organic compounds (VOCs) emitted by vegetation, but its formation can be modified by human activities, as demonstrated in recent research. Specifically, nitrogen oxides (NOx = NO + NO2) have been shown to play a critical role in the chemical formation of low-volatility compounds. We have updated the SOA scheme in the global NCAR Community Atmospheric Model version 4 with chemistry (CAM4-chem) by implementing a 4-product Volatility Basis Set (VBS) scheme, including NOx-dependent SOA yields and aging parameterizations. The predicted organic aerosol amounts capture both the magnitude and distribution of US surface annual mean measurements from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network to within 50 %, and the simulated vertical profiles are within a factor of two of Aerosol Mass Spectrometer (AMS) measurements from 13 aircraft-based field campaigns across different regions and seasons. We then perform sensitivity experiments to examine how the SOA loading responds to a 50 % reduction in anthropogenic nitric oxide (NO) emissions in different regions. We find limited SOA reductions of 0.9 to 5.6 %, 6.4 to 12.0 % and 0.9 to 2.8 % for the global, southeast US and Amazon NOx perturbations, respectively. The fact that SOA formation is almost unaffected by changes in NOx can be largely attributed to buffering in chemical pathways (low- and high-NOx pathways, O3- versus NO3-initiated oxidation) and to offsetting tendencies in the biogenic versus anthropogenic SOA responses.
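
    As background on how a volatility basis set works, the standard equilibrium-partitioning calculation can be sketched as below. This is a minimal illustration, not the CAM4-chem implementation; the bin loadings are hypothetical.

```python
import numpy as np

def vbs_partition(c_star, total_organic, c_oa_guess=1.0, tol=1e-9):
    """Iteratively solve equilibrium partitioning for a volatility basis set.

    c_star: saturation concentrations of the volatility bins (ug/m^3)
    total_organic: total organic mass in each bin, gas + particle (ug/m^3)
    Returns the condensed-phase (aerosol) mass in each bin.
    """
    c_oa = c_oa_guess
    for _ in range(200):
        xi = 1.0 / (1.0 + c_star / c_oa)          # particle-phase fraction per bin
        c_oa_new = np.sum(total_organic * xi)     # resulting total aerosol mass
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return total_organic * xi

# 4-product basis set with C* = 1, 10, 100, 1000 ug/m^3 (typical spacing);
# the loadings below are made up for illustration
c_star = np.array([1.0, 10.0, 100.0, 1000.0])
mass = np.array([2.0, 5.0, 10.0, 20.0])
aerosol = vbs_partition(c_star, mass)
print(aerosol, aerosol.sum())
```

    Low-volatility bins (small C*) partition almost entirely into the particle phase, while high-volatility bins stay mostly in the gas phase, which is why NOx-dependent shifts in product volatility change the SOA yield.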

  2. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results, so denoising them helps the fault diagnosis yield meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work demonstrates the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four schemes, and their fault identification capability as well as SNR, kurtosis and RMSE were compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models, and the performance of the four denoising schemes was evaluated based on the performance of these models. Based on the classification accuracy results, PCA proved to be the best denoising scheme in all these regards.
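
    The figures of merit used in the comparison above (SNR, RMSE, kurtosis) are straightforward to compute; a minimal sketch, in which a synthetic tone-plus-impacts signal stands in for a gear vibration record and a simple moving average stands in for the wavelet-based denoisers:

```python
import numpy as np

def snr_db(clean, denoised):
    """SNR of the denoised signal relative to the known clean reference."""
    noise = clean - denoised
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def rmse(clean, denoised):
    return np.sqrt(np.mean((clean - denoised) ** 2))

def kurtosis(x):
    """Non-excess kurtosis; large values indicate impulsive (fault-like) content."""
    x = x - np.mean(x)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

# toy gear-like signal: a mesh tone plus periodic fault impacts, then added noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)
clean[::256] += 4.0                       # periodic impacts
noisy = clean + 0.3 * rng.standard_normal(t.size)

# trivial stand-in denoiser (moving average), NOT one of the compared schemes
denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")
print(snr_db(clean, denoised), rmse(clean, denoised), kurtosis(denoised))
```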

  3. Implementation analysis of RC5 algorithm on Preneel-Govaerts-Vandewalle (PGV) hashing schemes using length extension attack

    NASA Astrophysics Data System (ADS)

    Siswantyo, Sepha; Susanti, Bety Hayat

    2016-02-01

    Preneel-Govaerts-Vandewalle (PGV) schemes comprise 64 possible single-block-length constructions that can be used to build a hash function from a block cipher. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack to those 4 secure PGV schemes, instantiated with the RC5 algorithm as the underlying block cipher, to test their collision resistance. The attack results show that collisions occur in all 4 schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operation in the RC5 algorithm, the XOR operations in the schemes, and the selection of the additional message block value all contribute to the occurrence of collisions.
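
    For reference, one of the four PGV schemes usually deemed secure is the Matyas-Meyer-Oseas mode, H_i = E_{H_{i-1}}(m_i) XOR m_i, where the chaining value keys the cipher. A toy sketch of that construction follows; the stand-in cipher is purely illustrative (it is not RC5 and has no security claims).

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def toy_block_cipher(key: int, block: int) -> int:
    """Stand-in 64-bit block cipher (NOT RC5, purely illustrative)."""
    x = block
    for r in range(8):
        x = (x + key + r) & MASK64
        x = ((x << 7) | (x >> 57)) & MASK64   # rotate left by 7
        x ^= key
    return x

def mmo_hash(message_blocks, iv=0x0123456789ABCDEF):
    """Matyas-Meyer-Oseas compression chain: H_i = E_{H_{i-1}}(m_i) XOR m_i."""
    h = iv
    for m in message_blocks:
        h = toy_block_cipher(h, m) ^ m
    return h

digest = mmo_hash([0x1111, 0x2222, 0x3333])
print(hex(digest))
```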

  4. Bandwidth reduction for video-on-demand broadcasting using secondary content insertion

    NASA Astrophysics Data System (ADS)

    Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy

    2005-01-01

    An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
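
    For context on why broadcast schedules save bandwidth at all, the classical harmonic broadcasting bound is easy to compute (this is textbook background, not the authors' secondary-content algorithm):

```python
def harmonic_bandwidth(consumption_rate_mbps: float, segments: int) -> float:
    """Server bandwidth required by classical harmonic broadcasting:
    segment i of the movie is broadcast on its own channel at rate b/i."""
    return consumption_rate_mbps * sum(1 / i for i in range(1, segments + 1))

def max_startup_delay(movie_minutes: float, segments: int) -> float:
    """Worst-case wait equals the duration of the first segment."""
    return movie_minutes / segments

b, k = 5.0, 30  # a 5 Mb/s movie split into 30 equal segments
print(harmonic_bandwidth(b, k))   # about 20 Mb/s of server bandwidth
print(max_startup_delay(120, k))  # 4-minute worst-case startup wait
```

    The bandwidth grows only logarithmically with the number of segments, which is the efficiency such schemes exploit; inserting secondary content changes the schedule structure further, as the paper describes.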

  5. Air cooling of disk of a solid integrally cast turbine rotor for an automotive gas turbine

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.

    1977-01-01

    A thermal analysis is made of surface cooling of a solid, integrally cast turbine rotor disk for an automotive gas turbine engine. Air purge and impingement cooling schemes are considered and compared with an uncooled reference case. Substantial reductions in blade temperature are predicted with each of the cooling schemes studied. It is shown that air cooling can result in a substantial gain in the stress-rupture life of the blade. Alternatively, increases in the turbine inlet temperature are possible.

  6. Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL

    NASA Astrophysics Data System (ADS)

    Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun

    2018-03-01

    A novel low voltage ride-through (LVRT) scheme for PMSG-based wind turbines, based on the Resistor Superconducting Fault Current Limiter (R-SFCL), is proposed in this paper. The LVRT scheme mainly consists of an R-SFCL connected in series between the transformer and the Grid Side Converter (GSC), and its basic modelling is discussed in detail. The proposed LVRT scheme is implemented to interact with a PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition, which shows that the proposed R-SFCL-based scheme can improve the transient performance and LVRT capability, consolidating the grid connection of wind turbines.

  7. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments

    PubMed Central

    Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security of remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.'s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.'s scheme still has weaknesses. In this paper, we show that Moon et al.'s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. Through the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme offers the best security functionality and is computationally efficient. PMID:29121050

  8. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments.

    PubMed

    Guo, Hua; Wang, Pei; Zhang, Xiyong; Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security of remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.'s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.'s scheme still has weaknesses. In this paper, we show that Moon et al.'s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. Through the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme offers the best security functionality and is computationally efficient.

  9. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
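
    The resubstitution pitfall quantified above can be reproduced in a few lines. A hypothetical Monte Carlo sketch follows (not the authors' simulations): a nearest-centroid classifier is trained on pure-noise features, so its true accuracy is 50 %, yet resubstitution reports far better performance while leave-one-out stays honest.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_predict(train_x, train_y, test_x):
    """Assign each test sample to the nearer class centroid."""
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    d0 = np.linalg.norm(test_x - c0, axis=1)
    d1 = np.linalg.norm(test_x - c1, axis=1)
    return (d1 < d0).astype(int)

def accuracy_resub(x, y):
    """Resubstitution: test on the very samples used for training."""
    return np.mean(nearest_centroid_predict(x, y, x) == y)

def accuracy_loo(x, y):
    """Leave-one-out: each sample is classified by a model trained without it."""
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        pred = nearest_centroid_predict(x[mask], y[mask], x[i:i + 1])
        hits += int(pred[0] == y[i])
    return hits / n

# pure-noise features: no classifier can truly do better than chance
n, d = 40, 20
resub, loo = [], []
for _ in range(100):
    x = rng.standard_normal((n, d))
    y = np.repeat([0, 1], n // 2)
    resub.append(accuracy_resub(x, y))
    loo.append(accuracy_loo(x, y))
# resubstitution looks far better than chance; LOO stays near 50 %
print(np.mean(resub), np.mean(loo))
```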

  10. An Efficient Quantum Somewhat Homomorphic Symmetric Searchable Encryption

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Wang, Ting; Sun, Zhiwei; Wang, Ping; Yu, Jianping; Xie, Weixin

    2017-04-01

    In 2009, Gentry introduced the first fully homomorphic encryption (FHE) scheme, based on ideal lattices. Later, based on the approximate greatest common divisor problem, the learning with errors problem, or the learning with errors over rings problem, FHE developed rapidly, albeit with low efficiency and only computational security. Combining homomorphic encryption with quantum mechanics, Liang proposed a symmetric quantum somewhat homomorphic encryption (QSHE) scheme based on the quantum one-time pad, which is unconditionally secure. It was later converted into a quantum fully homomorphic encryption scheme whose evaluation algorithm depends on the secret key. Compared with Liang's QSHE scheme, we propose a more efficient QSHE scheme for classical input states with perfect security, which is used to encrypt classical messages and whose evaluation algorithm does not require the secret key. Furthermore, an efficient symmetric searchable encryption (SSE) scheme is constructed based on our QSHE scheme. SSE is important in cloud storage, as it allows users to offload search queries to the untrusted cloud, which is then responsible for returning encrypted files that match the (also encrypted) search queries, protecting users' privacy.

  11. Integrating AlGaN/GaN high electron mobility transistor with Si: A comparative study of integration schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, Nagaboopathy; Raghavan, Srinivasan; Centre for Nano Science and Engineering, Indian Institute of Science, Bangalore 560012

    2015-10-07

    AlGaN/GaN high electron mobility transistor stacks deposited on a single growth platform are used to compare the most common AlN-to-GaN transition schemes used for integrating GaN with Si. The efficiency of transitions based on linearly graded, step-graded, interlayer, and superlattice schemes is evaluated in terms of dislocation density reduction, stress management, surface roughness, and, eventually, mobility of the 2D electron gas. In a 500 nm GaN probe layer, all of these transitions result in total transmission-electron-microscopy-measured dislocation densities of 1 to 3 × 10^9/cm^2 and <1 nm surface roughness. The 2D electron gas channels formed at an AlGaN-1 nm AlN/GaN interface deposited on this GaN probe layer all have mobilities of 1600-1900 cm^2/V s at a carrier concentration of 0.7-0.9 × 10^13/cm^2. Compressive stress and changes in composition in GaN-rich regions of the AlN-GaN transition are the most effective at reducing dislocation density. Amongst all the transitions studied, the step-graded transition implements this feature of GaN integration in the simplest and most consistent manner.

  12. Elastic all-optical multi-hop interconnection in data centers with adaptive spectrum allocation

    NASA Astrophysics Data System (ADS)

    Hong, Yuanyuan; Hong, Xuezhi; Chen, Jiajia; He, Sailing

    2017-01-01

    In this paper, a novel flex-grid all-optical interconnect scheme that supports transparent multi-hop connections in data centers is proposed. An inter-rack all-optical multi-hop connection is realized with an optical loop employed at flex-grid wavelength selective switches (WSSs) in an intermediate rack, rather than by relaying through optical-electric-optical (O-E-O) conversions. Compared with the conventional O-E-O based approach, the proposed all-optical scheme is able to off-load the traffic at intermediate racks, leading to a reduction of power consumption and cost. The transmission performance of the proposed flex-grid multi-hop all-optical interconnect scheme with various modulation formats, including both coherently detected and directly detected approaches, is investigated by Monte-Carlo simulations. To enhance the spectrum efficiency (SE), number-of-hop-adaptive bandwidth allocation is introduced. Numerical results show that the SE can be improved by up to 33.3% at 40 Gbps and by up to 25% at 100 Gbps. The impact of parameters such as the targeted bit error rate (BER) level and the insertion loss of components on the transmission performance of the proposed approach is also explored. The results show that the maximum SE improvement of the adaptive approach over the non-adaptive one grows as the targeted BER level and the component insertion loss decrease.

  13. Dust emission parameterization scheme over the MENA region: Sensitivity analysis to soil moisture and soil texture

    NASA Astrophysics Data System (ADS)

    Gherboudj, Imen; Beegum, S. Naseema; Marticorena, Beatrice; Ghedira, Hosni

    2015-10-01

    The mineral dust emissions from arid/semiarid soils were simulated over the MENA (Middle East and North Africa) region using the dust parameterization scheme proposed by Alfaro and Gomes (2001), to quantify the effect of the soil moisture and clay fraction on the emissions. For this purpose, an extensive data set of Soil Moisture and Ocean Salinity soil moisture, European Centre for Medium-Range Weather Forecasting wind speed at 10 m height, Food and Agriculture Organization soil texture maps, MODIS (Moderate Resolution Imaging Spectroradiometer) Normalized Difference Vegetation Index, and erodibility of the soil surface was collected for a period of 3 years, from 2010 to 2013. Though the considered data sets have different temporal and spatial resolutions, efforts have been made to make them consistent in time and space. The simulated sandblasting fluxes over the region were first validated qualitatively using MODIS Deep Blue aerosol optical depth and the EUMETSAT MSG (Meteosat Second Generation) dust product from SEVIRI (Spinning Enhanced Visible and Infrared Imager), and quantitatively based on the available ground-based measurements of near-surface particulate mass concentrations (PM10) collected over four stations in the MENA region. Sensitivity analyses were performed to investigate the effect of soil moisture and clay fraction on the emission flux. The results showed that soil moisture and soil texture play significant roles in the dust emissions over the MENA region, particularly over the Arabian Peninsula. An inversely proportional dependency is observed between the soil moisture and the sandblasting flux, where a steep reduction in flux is observed at low friction velocity and a gradual reduction is observed at high friction velocity. Conversely, a directly proportional dependency is observed between the soil clay fraction and the sandblasting flux, where a steep increase in flux is observed at low friction velocity and a gradual increase is observed at high friction velocity. The magnitude of the percentage reduction/increase in the sandblasting flux decreases with increasing friction velocity for both soil moisture and soil clay fraction. Furthermore, these variables are interdependent, leading to a gradual decrease in the percentage increase in the sandblasting flux for higher soil moisture values.

  14. An Experimental Investigation of Unsteady Surface Pressure on an Airfoil in Turbulence

    NASA Technical Reports Server (NTRS)

    Mish, Patrick F.; Devenport, William J.

    2003-01-01

    Measurements of fluctuating surface pressure were made on a NACA 0015 airfoil immersed in grid-generated turbulence. The airfoil model has a 2 ft chord and spans the 6 ft Virginia Tech Stability Wind Tunnel test section. Two grids were used to investigate the effects of turbulence length scale on the surface pressure response: a large grid which produced turbulence with an integral scale 13% of the chord, and a smaller grid which produced turbulence with an integral scale 1.3% of the chord. Measurements were performed at angles of attack α from 0° to 20°. An array of microphones mounted subsurface was used to measure the unsteady surface pressure, with the goal of characterizing the effects of angle of attack on the inviscid response. Lift spectra calculated from the pressure measurements at each angle of attack revealed two distinct interaction regions: for reduced frequency ω_r = ωb/U_∞ < 10, a reduction in unsteady lift of up to 7 decibels (dB) occurs as the angle of attack is increased, while an increase occurs for ω_r > 10. The reduction in unsteady lift at low ω_r with increasing angle of attack is a result that has never before been shown either experimentally or theoretically. Based on analysis of surface-pressure spanwise correlation length scales, the source of the reduction in lift spectral level appears to be closely related to the distortion of the inflow turbulence. Furthermore, while the distortion of the inflow appears to be critical in this experiment, this effect does not seem to be significant in flows with integral scales that are large relative to the chord, based on the previous experimental work of McKeough, suggesting that the airfoil's size relative to the inflow integral scale is critical in defining how the airfoil responds under variation of angle of attack. A prediction scheme is developed that correctly accounts for the effects of distortion when the inflow integral scale is small relative to the airfoil chord. This scheme uses Rapid Distortion Theory to account for the distortion of the inflow, with the distortion field modeled using a circular cylinder.

  15. A Quantum Multi-proxy Blind Signature Scheme Based on Genuine Four-Qubit Entangled State

    NASA Astrophysics Data System (ADS)

    Tian, Juan-Hong; Zhang, Jian-Zhong; Li, Yan-Ping

    2016-02-01

    In this paper, we propose a multi-proxy blind signature scheme based on controlled teleportation. Genuine four-qubit entangled state functions as quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. The security analysis shows the scheme satisfies the security features of multi-proxy signature, unforgeability, undeniability, blindness and unconditional security.

  16. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and the I_Q capability, varies with time depending on the I_Q capability of the converters and a voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to a voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.

  17. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE PAGES

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook; ...

    2017-04-28

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and the I_Q capability, varies with time depending on the I_Q capability of the converters and a voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to a voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.

  18. Fully refocused multi-shot spatiotemporally encoded MRI: robust imaging in the presence of metallic implants.

    PubMed

    Ben-Eliezer, Noam; Solomon, Eddy; Harel, Elad; Nevo, Nava; Frydman, Lucio

    2012-12-01

    An approach has been recently introduced for acquiring arbitrary 2D NMR spectra or images in a single scan, based on the use of frequency-swept RF pulses for the sequential excitation and acquisition of the spins' response. This spatiotemporal-encoding (SPEN) approach enables a unique, voxel-by-voxel refocusing of all frequency shifts in the sample, at all instants throughout the data acquisition. The present study investigates the use of this full-refocusing aspect of SPEN-based imaging in the multi-shot MRI of objects subject to sizable field inhomogeneities that complicate conventional imaging approaches. 2D MRI experiments were performed at 7 T on phantoms and on mice in vivo, focusing on imaging in proximity to metallic objects. Fully refocused SPEN-based spin echo imaging sequences were implemented, using both Cartesian and back-projection trajectories, and compared with k-space-encoded spin echo imaging schemes collected on identical samples under equal bandwidths and acquisition timing conditions. In all cases assayed, the fully refocused spatiotemporally encoded experiments evidenced a ca. 50 % reduction in signal dephasing in the proximity of the metal, as compared with analogous results from the k-space-encoded spin echo counterparts. The results of this study suggest that SPEN-based acquisition schemes carry the potential to overcome strong field inhomogeneities of the kind that currently preclude high-field, high-resolution tissue characterizations in the neighborhood of metallic implants.

  19. Detection of proteins and bacteria using an array of feedback capacitance sensors.

    PubMed

    Mehta, Manav; Hanumanthaiah, Chandra Sekar; Betala, Pravin Ajitkumar; Zhang, Hong; Roh, SaeWeon; Buttner, William; Penrose, William R; Stetter, Joseph R; Pérez-Luna, Victor H

    2007-12-15

    An integrated array of micron-dimension capacitors, originally developed for biometric applications (fingerprint identification), was engineered for the detection of biological agents such as proteins and bacteria. The device consists of an array of 93,184 (256 x 364) individual capacitor-based sensing elements located underneath a thin (0.8 microm) layer of glass. This glass layer can be functionalized with organosilane-based monolayers to provide groups amenable to the immobilization of bioreceptors such as antibodies, enzymes, peptides, aptamers, and nucleotides. Upon functionalization with antibodies, and in conjunction with signal amplification schemes that perturb the dielectric constant around the captured antigens, the system can be used as a detector of biological agents. Two signal amplification schemes were tested in this work: one consisted of 4 microm diameter latex immunobeads, and a second was based on colloidal-gold-catalyzed reduction of silver. These amplification approaches were demonstrated and show that the system is capable of specific detection of bacteria (Escherichia coli) and proteins (ovalbumin). The present work is a proof-of-principle demonstration that a simple fingerprint detector based on feedback capacitance measurements can be implemented as a biosensor. The approach could easily be expanded to test simultaneously for a large number of analytes and multiple samples, given the device's large number of detectors. The device and required instrumentation are highly portable and avoid expensive, bulky equipment because the approach relies purely on electronic detection.

  20. A multihop key agreement scheme for wireless ad hoc networks based on channel characteristics.

    PubMed

    Hao, Zhuo; Zhong, Sheng; Yu, Nenghai

    2013-01-01

    A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks.
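
    The core idea behind relay-assisted multihop key agreement can be illustrated with a single intermediate node. The sketch below is hypothetical and deliberately minimal: real schemes derive the pairwise keys from channel-characteristic measurements and add authentication against internal and MITM adversaries, both of which are omitted here.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# pairwise keys assumed to come from channel measurements between neighbors;
# here they are just random placeholders
k_ar = secrets.token_bytes(16)   # shared by endpoint A and relay R
k_rb = secrets.token_bytes(16)   # shared by relay R and endpoint B

# R publicly announces the XOR of its two pairwise keys
announcement = xor_bytes(k_ar, k_rb)

# A recovers the end-to-end key from its own pairwise key and the announcement;
# B already holds k_rb, so both ends now agree on it
k_ab_at_A = xor_bytes(k_ar, announcement)
assert k_ab_at_A == k_rb
```

    An external eavesdropper sees only the XOR of the two pairwise keys, which reveals neither key; the relay itself, however, knows everything, which is exactly the internal-eavesdropper problem the improved scheme in the abstract addresses.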

  1. A Multihop Key Agreement Scheme for Wireless Ad Hoc Networks Based on Channel Characteristics

    PubMed Central

    Yu, Nenghai

    2013-01-01

    A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks. PMID:23766725

  2. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
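
    A linear analogue of the auto-associative bottleneck makes the idea concrete. The sketch below is hypothetical (it uses PCA reconstruction in place of the paper's neural network, and synthetic "sensor" data rather than T700 signals): healthy readings lie near a low-dimensional subspace, so a large reconstruction residual flags a failed sensor.

```python
import numpy as np

rng = np.random.default_rng(2)

# healthy training data: 5 correlated "sensor" channels driven by 2 factors,
# i.e. the measurement vector has analytical redundancy
factors = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 5))
healthy = factors @ mixing + 0.01 * rng.standard_normal((500, 5))

# linear stand-in for the auto-associative network's bottleneck:
# project onto the top-2 principal components and reconstruct
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
basis = vt[:2]                        # 2-dimensional bottleneck

def reconstruct(x):
    return mean + (x - mean) @ basis.T @ basis

def failed_sensors(x, threshold=0.5):
    """Indices whose residual exceeds the fault-determination threshold."""
    residual = np.abs(x - reconstruct(x))
    return np.where(residual > threshold)[0]

sample = factors[0] @ mixing          # a healthy reading
faulty = sample.copy()
faulty[3] += 50.0                     # hard failure on sensor 3
# note: a large fault can smear residual onto other channels as well
print(failed_sensors(sample), failed_sensors(faulty))
```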

  3. Carte du Ciel, San Fernando zone

    NASA Astrophysics Data System (ADS)

    Abad, C.

    2014-06-01

    An updated summary of a future large astrometric catalogue is presented, based on the two most important astrometric projects carried out by the Real Instituto y Observatorio de la Armada de San Fernando (ROA). The goal is to make a catalogue of positions and proper motions based on ROA's Carte du Ciel (CdC) and Astrographic Catalogue (AC) San Fernando zone plates, together with the HAMC2 meridian circle catalogue. The CdC and AC plates are being reduced together to provide first-epoch positions, while HAMC2 will provide second-epoch ones. New techniques have been applied, ranging from the use of a commercial flatbed scanner to reduction schemes designed to avoid the systematics it introduces. Only thirty plates (out of 540) remain to be processed, owing to scanning problems that are being solved.

  4. Metasurface-based anti-reflection coatings at optical frequencies

    NASA Astrophysics Data System (ADS)

    Monti, Alessio; Alù, Andrea; Toscano, Alessandro; Bilotti, Filiberto

    2018-05-01

    In this manuscript, we propose a metasurface approach for the reduction of electromagnetic reflection from an arbitrary air-dielectric interface. The proposed technique exploits the exotic optical response of plasmonic nanoparticles to achieve complete cancellation of the field reflected by a dielectric substrate by means of destructive interference. Unlike earlier nanoparticle-based anti-reflection approaches, our design scheme is supported by a simple transmission-line formulation that allows a closed-form characterization of the anti-reflection performance of a nanoparticle array. Furthermore, since the working principle of the proposed devices relies on an average effect that does not critically depend on the array geometry, our approach enables low-cost production and easy scalability to large sizes. Our theoretical considerations are supported by full-wave simulations confirming the effectiveness of this design principle.

  5. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows clear advantages in both operation time and PSNR.
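
    A minimal sketch of the real-SVD route, assuming one plausible construction of the matrix C (the three channel matrices stacked side by side; the paper's exact construction may differ): truncate the SVD to rank k and reassemble the channels.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 24, 3))          # synthetic RGB image in [0, 1]

# Assumed construction: stack the R, G and B channel matrices side by
# side into a single real matrix C.
C = np.hstack([image[:, :, c] for c in range(3)])   # shape (32, 72)

k = 8                                    # number of singular values kept
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_k = (U[:, :k] * s[:k]) @ Vt[:k]        # best rank-k approximation

# Reassemble the channels and compute the compression ratio (CR) as the
# ratio of original entries to stored numbers (k left/right vectors + k
# singular values).
approx = np.stack(np.hsplit(C_k, 3), axis=2)
cr = C.size / (k * (C.shape[0] + C.shape[1] + 1))
assert approx.shape == image.shape
# Eckart-Young: the rank-k truncation attains the minimal Frobenius error.
assert np.isclose(np.linalg.norm(C - C_k), np.sqrt(np.sum(s[k:] ** 2)))
assert cr > 1
```

    The quaternion variant instead treats each pixel as a pure quaternion and factors a quaternion matrix, which is costlier per operation; the trade-off reported above follows from that difference.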

  6. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows clear advantages in both operation time and PSNR. PMID:28257451

  7. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387
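
    The greedy baseline can be sketched as follows. This is an illustrative reconstruction under simplified assumptions (each sensor may point in one direction per cover set; the names and toy network are hypothetical): repeatedly pick the orientation that covers the most still-uncovered targets.

```python
def greedy_cover(coverage, targets):
    """Pick sensor orientations until every target is covered, each time
    choosing the orientation covering the most still-uncovered targets.

    coverage: dict mapping (sensor, direction) -> set of covered targets.
    Returns the chosen orientations, or None if some target is unreachable.
    """
    uncovered, chosen, used_sensors = set(targets), [], set()
    while uncovered:
        best = max(
            (o for o in coverage if o[0] not in used_sensors),
            key=lambda o: len(coverage[o] & uncovered),
            default=None,
        )
        if best is None or not (coverage[best] & uncovered):
            return None                    # remaining targets cannot be covered
        chosen.append(best)
        used_sensors.add(best[0])          # a sensor points one way at a time
        uncovered -= coverage[best]
    return chosen

# Hypothetical 3-sensor, 2-direction network watching targets 1-4.
coverage = {("s1", 0): {1, 2}, ("s1", 1): {3},
            ("s2", 0): {2, 3}, ("s2", 1): {4},
            ("s3", 0): {1, 4}, ("s3", 1): {2}}
cover = greedy_cover(coverage, {1, 2, 3, 4})
assert cover is not None
assert {1, 2, 3, 4} <= set().union(*(coverage[o] for o in cover))
```

    The genetic-algorithm scheme searches the same space of cover sets globally, evolving candidate schedules rather than committing to the locally best orientation at each step.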

  8. Could the employment-based targeting approach serve Egypt in moving towards a social health insurance model?

    PubMed

    Shawky, S

    2010-06-01

    The current health insurance system in Egypt targets the productive population through an employment-based scheme bounded by a cost ceiling and focusing on curative care. Egypt Social Contract Survey data from 2005 were used to evaluate the impact of the employment-based scheme on health system accessibility and financing. Only 22.8% of the population in the productive age range (19-59 years) benefited from any health insurance scheme. The employment-based scheme covered 39.3% of the working population and was skewed towards urban areas, older people, females and the wealthier. It did not increase service utilization, but reduced out-of-pocket expenditure. Egypt should blend all health insurance schemes and adopt an innovative approach to reach universal coverage.

  9. Quantum attack-resistant certificateless multi-receiver signcryption scheme.

    PubMed

    Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong

    2013-01-01

    Existing certificateless signcryption schemes were designed mainly on the basis of traditional public key cryptography, whose security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems can be solved efficiently by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attacks. Multivariate public key cryptography (MPKC), which can resist quantum attacks, is one of the alternative solutions for guaranteeing the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can therefore withstand quantum attacks. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers. Security analysis shows that our scheme is a secure MPKC-based scheme: we prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis also shows that our scheme has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computational capacity such as smart cards.

  10. A splitting scheme based on the space-time CE/SE method for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    2016-08-01

    Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two-space dimension. The model describes the charge transport in semiconductor devices. Mathematically, the models can be written as a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with the splitting scheme based on Nessyahu-Tadmor (NT) central scheme for convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low field mobility, device length, lattice temperature and voltages for one-space dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two dimensional simulation is also performed by CE/SE method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.
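
    The splitting idea can be illustrated on a scalar toy model rather than the full hydrodynamical system: u_t + a u_x = -(u - u_eq)/tau, with a first-order upwind step standing in for the CE/SE hyperbolic solver and the stiff relaxation step integrated exactly. Everything below (grid, parameters, the scalar model itself) is an illustrative assumption.

```python
import numpy as np

def split_step(u, u_eq, a, dx, dt, tau):
    """One splitting step: explicit upwind convection (a simple stand-in
    for the CE/SE hyperbolic solver), then the relaxation step
    u_t = -(u - u_eq)/tau integrated exactly (unconditionally stable)."""
    # Hyperbolic step: first-order upwind for a > 0, periodic domain.
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # Relaxation step: exact solution of the linear ODE over dt.
    return u_eq + (u - u_eq) * np.exp(-dt / tau)

# Advect a bump toward equilibrium u_eq = 0 with strong relaxation.
n, a, dx, tau = 100, 1.0, 0.01, 1e-3
dt = 0.5 * dx / a                      # CFL-limited by convection only
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200 * (x - 0.5) ** 2)
for _ in range(200):
    u = split_step(u, 0.0, a, dx, dt, tau)
assert np.all(np.isfinite(u))
assert np.max(np.abs(u)) < 1e-3        # relaxation has damped the bump
```

    The point of the splitting is visible in the time-step restriction: dt obeys only the convective CFL condition, while the stiff relaxation (tau much smaller than dt here) is handled implicitly/exactly without destabilizing the scheme.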

  11. An enhanced biometric-based authentication scheme for telecare medicine information systems using elliptic curve cryptosystem.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2015-03-01

    Telecare medical information systems (TMISs) enable patients to conveniently enjoy telecare services at home. The protection of patients' privacy is a key issue due to the openness of the communication environment. Authentication is typically adopted to guarantee confidential and authorized interaction between the patient and the remote server, and numerous remote authentication schemes based on cryptography have been presented to achieve these goals. Recently, Arshad et al. (J Med Syst 38(12), 2014) presented a secure and efficient three-factor authenticated key exchange scheme to remedy the weaknesses of Tan et al.'s scheme (J Med Syst 38(3), 2014). In this paper, we show that a successful off-line password attack on Arshad et al.'s scheme allows an adversary to impersonate any user of the system. To thwart these attacks, an enhanced biometric and smart card based remote authentication scheme for TMISs is proposed. In addition, BAN logic is applied to demonstrate the completeness of the enhanced scheme. Security and performance analyses show that our enhanced scheme satisfies more security properties at a lower computational cost than previously proposed schemes.

  12. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
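
    The entropy-guided chunk elimination can be sketched as follows. This is a simplified reconstruction: an ordinary least-squares fit stands in for the linear SVM, and the chunk-size rule (drop more features when the weight histogram has low entropy, i.e. a few features dominate) is an illustrative choice, not the paper's exact formula.

```python
import numpy as np

def weight_entropy(w, bins=10):
    """Normalized entropy of the |weight| histogram: low entropy means a
    few features dominate, so a large chunk of the rest can be dropped."""
    hist, _ = np.histogram(np.abs(w), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(bins)

def entropy_rfe(X, y, n_keep):
    """Recursive feature elimination with entropy-adaptive chunk size."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        order = np.argsort(np.abs(w))            # least important first
        h = weight_entropy(w)
        chunk = max(1, int((1.0 - h) * (len(active) - n_keep)))
        for i in sorted(order[:chunk], reverse=True):
            del active[i]                        # drop a chunk per round
    return active

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 40))
y = 3.0 * X[:, 5] - 2.0 * X[:, 17]               # only genes 5 and 17 matter
kept = entropy_rfe(X, y, n_keep=2)
assert set(kept) == {5, 17}
```

    Because most weights are near zero early on, the entropy is low and whole chunks are eliminated at once, which is where the claimed speed-up over one-at-a-time RFE comes from.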

  13. Generating highly accurate prediction hypotheses through collaborative ensemble learning

    NASA Astrophysics Data System (ADS)

    Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco

    2017-03-01

    Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.

  14. An active monitoring method for flood events

    NASA Astrophysics Data System (ADS)

    Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya

    2018-07-01

    Timely, active detection and monitoring of a flood event are critical for a quick response, effective decision-making and disaster reduction. To this end, this paper proposes an active service framework for flood monitoring based on Sensor Web services, together with an active model for the concrete implementation of the framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented by the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of the schemes and models and to plan the model input data. The active model, called SMDSA, defines the quantitative calculation method for five elements (scheme, model, data, sensor, and auxiliary information) as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 was conducted to test the proposed framework and model. The results show that 1) the proposed active service framework is efficient for timely and automated flood monitoring; 2) the active model, SMDSA, provides a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and 3) as much preliminary work as possible should be done in advance to take full advantage of the active service framework and the active model.
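
    The publish-subscribe mechanism at the heart of the active-warning component can be sketched as a minimal event bus. This is an illustration of the mechanism only, not the Sensor Event Service API; the station name and threshold are hypothetical.

```python
class SensorEventBus:
    """Minimal publish-subscribe hub: subscribers register a predicate on
    observations and a callback that fires when a new observation matches
    (e.g. a water level crossing a warning threshold)."""

    def __init__(self):
        self._subscriptions = []

    def subscribe(self, predicate, callback):
        self._subscriptions.append((predicate, callback))

    def publish(self, observation):
        for predicate, callback in self._subscriptions:
            if predicate(observation):
                callback(observation)

alerts = []
bus = SensorEventBus()
# Hypothetical trigger: warn when the gauge exceeds 21.5 m.
bus.subscribe(lambda obs: obs["level_m"] > 21.5,
              lambda obs: alerts.append(f"flood warning at {obs['station']}"))
bus.publish({"station": "Liangzi Lake", "level_m": 20.9})   # below threshold
bus.publish({"station": "Liangzi Lake", "level_m": 21.8})   # triggers warning
assert alerts == ["flood warning at Liangzi Lake"]
```

    The "active" character comes from this inversion of control: the monitoring workflow is started by matching observations, not by a human polling the data.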

  15. Reactive granular optics for passive tracking of the sun

    NASA Astrophysics Data System (ADS)

    Frenkel, I.; Niv, A.

    2017-08-01

    The growing need for cost-effective renewable energy sources is hampered by the stagnation in solar cell technology, which prevents a substantial reduction in module and energy-production prices. Lowering the energy-production cost could be achieved by using modules with higher efficiency. One possible means of increasing module efficiency is concentrated photovoltaics (CPV). CPV, however, requires complex and accurate active tracking of the sun, which erodes much of its cost-effectiveness. Here, we propose a passive tracking scheme based on a reactive optical device. The optical reaction is achieved by a new kind of light-activated mechanical force that acts on micron-sized particles. This optical force allows the formation of granular disordered optical media that can be switched from opaque to transparent depending on the intensity of the light they interact with. Such media give rise to an efficient passive tracking scheme that, when combined with an external optical cavity, forms a new solar power conversion approach. Being external to the cell itself, this approach is indifferent to the type of semiconducting material used, as well as to other aspects of the cell design. This, in turn, liberates the cell layout from its optical constraints, paving the way to higher efficiencies at a lower module price.

  16. The Mexican Social Security counterreform: pensions for profit.

    PubMed

    Laurell, A C

    1999-01-01

    The social security counterreform, initiated in 1997, forms part of the neoliberal reorganization of Mexican society. The reform implies a profound change in the guiding principles of social security, as the public model based on integrality, solidarity, and redistribution is replaced by a model based on private administration of funds and services, individualization of entitlement, and reduction of rights. Its economic purpose is to move social services and benefits into the direct sphere of private capital accumulation. Although these changes will involve the whole social security system--old-age and disability pensions, health care, child care, and workers' compensation--they are most immediately evident in the pension scheme. The pay-as-you-go scheme is being replaced by privately managed individual retirement accounts which especially favor the big financial groups. These groups are gaining control over huge amounts of capital, are authorized to charge a high commission, and run no financial risks. The privatization of the system requires decisive state intervention with a legal change and a sizable state subsidy (1 to 1.5 percent of GNP) over five decades. The supposed positive impact on economic growth and employment is uncertain. A review of the new law and of the estimates of future annuities reveals shrinking pension coverage and inadequate incomes from pensions.

  17. Burst-mode optical label processor with ultralow power consumption.

    PubMed

    Ibrahim, Salah; Nakahara, Tatsushi; Ishikawa, Hiroshi; Takahashi, Ryo

    2016-04-04

    A novel label processor subsystem for 100-Gbps (25-Gbps × 4λs) burst-mode optical packets is developed, in which a highly energy-efficient method is pursued for extracting the ultrafast packet label and interfacing it to a CMOS-based processor where label recognition takes place. The method performs serial-to-parallel conversion of the label bits on a bit-by-bit basis, using an optoelectronic converter operated with a set of optical triggers generated in a burst-mode manner upon packet arrival. Here we present three key achievements that enabled a significant reduction in the total power consumption and latency of the whole subsystem: 1) based on a novel operation mechanism for providing amplification with bit-level selectivity, an optical trigger pulse generator that consumes power only for a very short duration upon packet arrival is proposed and experimentally demonstrated; 2) the energy of the optical triggers needed by the optoelectronic serial-to-parallel converter is reduced by utilizing a negative-polarity signal together with an enhanced conversion scheme called the discharge-or-hold scheme; 3) the necessary optical trigger energy is further halved by coupling the triggers through the chip's backside, for which a novel lens-free packaging method is developed that enables a low-cost alignment process requiring only simple visual observation.

  18. Computerized detection of unruptured aneurysms in MRA images: reduction of false positives using anatomical location features

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Gao, Xin; Hara, Takeshi; Fujita, Hiroshi; Ando, Hiromichi; Yamakawa, Hiroyasu; Asano, Takahiko; Kato, Hiroki; Iwama, Toru; Kanematsu, Masayuki; Hoshi, Hiroaki

    2008-03-01

    The detection of unruptured aneurysms is a major subject in magnetic resonance angiography (MRA). However, their accurate detection is often difficult because of the overlapping between the aneurysm and the adjacent vessels on maximum intensity projection images. The purpose of this study is to develop a computerized method for the detection of unruptured aneurysms in order to assist radiologists in image interpretation. The vessel regions were first segmented using gray-level thresholding and a region growing technique. The gradient concentration (GC) filter was then employed for the enhancement of the aneurysms. The initial candidates were identified in the GC image using a gray-level threshold. For the elimination of false positives (FPs), we determined shape features and an anatomical location feature. Finally, rule-based schemes and quadratic discriminant analysis were employed along with these features for distinguishing between the aneurysms and the FPs. The sensitivity for the detection of unruptured aneurysms was 90.0% with 1.52 FPs per patient. Our computerized scheme can be useful in assisting the radiologists in the detection of unruptured aneurysms in MRA images.
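
    The vessel-segmentation step above combines gray-level thresholding with region growing; the idea can be sketched in a minimal 4-connected implementation on a toy image (the image, seed, and thresholds are illustrative, not from the MRA pipeline).

```python
from collections import deque
import numpy as np

def region_grow(image, seed, low, high):
    """Grow a region from `seed`, adding 4-connected neighbors whose gray
    level lies inside [low, high]."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if not (low <= image[r, c] <= high):
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Toy "angiogram": a bright vessel on a dark background, plus a bright
# pixel that is not connected to the seed.
img = np.zeros((5, 5))
img[1, 1:4] = 200          # vessel segment
img[2, 3] = 200            # connected to it
img[4, 4] = 200            # disconnected bright pixel
mask = region_grow(img, seed=(1, 1), low=100, high=255)
assert mask.sum() == 4     # the three vessel pixels + (2, 3)
assert not mask[4, 4]      # unconnected bright pixel is excluded
```

    In the full scheme the grown vessel mask is then fed to the gradient concentration filter, and candidate aneurysms are filtered by shape and anatomical-location features.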

  19. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    PubMed

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating the training, analysis, and benchmarking of machine-learning models. Availability and implementation: iFeature is freely available as an online web server and a stand-alone toolkit at http://iFeature.erc.monash.edu/ and https://github.com/Superzchen/iFeature/. Contact: jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
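
    One of the simplest descriptors in this family is the Amino Acid Composition (AAC): a 20-dimensional vector of residue frequencies. The sketch below is an independent illustration of that encoding, not iFeature's API; the peptide is made up.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def aac(sequence):
    """Amino Acid Composition: the frequency of each standard residue,
    in a fixed alphabet order, so every sequence maps to a vector of
    the same length regardless of its own length."""
    counts = Counter(sequence.upper())
    n = len(sequence)
    return [counts[a] / n for a in AMINO_ACIDS]

features = aac("MKKLVIA")              # hypothetical 7-residue peptide
assert len(features) == 20
assert abs(sum(features) - 1.0) < 1e-9
# K appears twice among 7 residues:
assert features[AMINO_ACIDS.index("K")] == 2 / 7
```

    Richer encodings (dipeptide composition, pseudo amino acid composition, AAindex-based profiles) follow the same pattern of mapping a variable-length sequence to a fixed-length numeric vector suitable for machine learning.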

  20. High-sensitivity bend angle measurements using optical fiber gratings.

    PubMed

    Rauf, Abdul; Zhao, Jianlin; Jiang, Biqiang

    2013-07-20

    We present a high-sensitivity and more flexible bend measurement method based on the coupling of the core mode to cladding modes at the bending region, in concatenation with an optical fiber grating serving as a band reflector. The characteristics of a bend sensing arm composed of a bending region and an optical fiber grating are examined for different configurations, including a single fiber Bragg grating (FBG), a chirped FBG (CFBG), and double FBGs. Bend loss curves for coated, stripped, and etched sections of fiber in the bending region are obtained experimentally for the FBG, CFBG, and double-FBG configurations, and the effect of the separation between the bending region and the optical fiber grating on the loss is measured. The loss responses of the single-FBG and CFBG configurations are compared to assess their effectiveness for practical applications. The sensitivity of the double-FBG scheme is demonstrated to be twice that of the single-FBG and CFBG configurations, so it acts as a sensitivity multiplier. The bend loss response for different fiber diameters, obtained through etching in 40% hydrofluoric acid, is measured in the double-FBG scheme, resulting in a significantly increased sensitivity and a reduced dead zone.
