Sample records for alternative scheme based

  1. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    NASA Astrophysics Data System (ADS)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
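    The idea of converting a multi-term fractional equation into a system can be sketched with a minimal example. The solver below is a generic explicit fractional (Caputo) Euler method, not any of the three specific schemes the authors compare; the decomposed test problem and all names are illustrative assumptions.

```python
import math

def frac_euler(alpha, f, x0, T, N):
    """Explicit fractional Euler (product-rectangle rule) for the Caputo
    system D^alpha x(t) = f(t, x), 0 < alpha <= 1, x(0) = x0."""
    h = T / N
    c = h**alpha / math.gamma(alpha + 1.0)
    xs = [list(x0)]
    hist = []  # stored f(t_j, x_j) values
    for n in range(N):
        hist.append(f(n * h, xs[-1]))
        x_new = list(x0)
        for j in range(n + 1):
            w = (n + 1 - j)**alpha - (n - j)**alpha  # quadrature weight
            for k in range(len(x0)):
                x_new[k] += c * w * hist[j][k]
        xs.append(x_new)
    return xs

# Decompose the multi-term problem D^1 y = 1, y(0) = 0 (exact solution
# y = t) into a commensurate system of order 1/2:
#   x1 = y,  D^0.5 x1 = x2,  D^0.5 x2 = 1.
sol = frac_euler(0.5, lambda t, x: (x[1], 1.0), (0.0, 0.0), T=1.0, N=800)
```

The first component of `sol[-1]` approximates y(1) = 1 up to the O(h) quadrature error of the rectangle rule.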

  2. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with sharp edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
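    The mechanics of alternating minimization can be shown on a smooth toy biconvex objective; this sketch illustrates only the alternating structure (monotone objective decrease, convergence to a critical point), not the paper's nonsmooth Kurdyka-Łojasiewicz analysis. The objective and all names are illustrative assumptions.

```python
# Toy biconvex objective f(x, y) = (x*y - 1)^2 + lam*(x^2 + y^2):
# nonconvex jointly, but exactly minimizable in each variable.
lam = 0.1

def f(x, y):
    return (x * y - 1.0)**2 + lam * (x * x + y * y)

def argmin_coord(y):  # solve d/dx f = 2y(xy - 1) + 2*lam*x = 0 for x
    return y / (y * y + lam)

x, y, vals = 1.0, 1.0, []
for _ in range(200):
    x = argmin_coord(y)    # exact minimization over x, y fixed
    y = argmin_coord(x)    # symmetric update for y
    vals.append(f(x, y))

# Partial derivatives at the final iterate (both vanish at a critical point).
gx = 2 * y * (x * y - 1) + 2 * lam * x
gy = 2 * x * (x * y - 1) + 2 * lam * y
```

Exact coordinate minimization guarantees the objective values `vals` are non-increasing, and the iterates settle at a critical point of the nonconvex objective.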

  3. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    NASA Astrophysics Data System (ADS)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
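    Each one-dimensional implicit solve inside an ADI step is a tridiagonal system, typically handled by the Thomas algorithm (an O(n) LU sweep). This is a generic sketch of that solver, not the paper's specific osmosis discretisation.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, c the super-diagonal (c[-1] unused).
    One forward-elimination / back-substitution sweep."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Laplacian-style system; the exact solution is [1, 1, 1].
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
           [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
```

Because each sweep is linear in the grid size, the per-step cost of ADI stays proportional to the number of pixels even for large time steps.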

  4. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940

  5. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.

  6. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  7. Break-even cost of cloning in genetic improvement of dairy cattle.

    PubMed

    Dematawewa, C M; Berger, P J

    1998-04-01

    Twelve different models for alternative progeny-testing schemes based on genetic and economic gains were compared. The first 10 alternatives were considered to be optimally operating progeny-testing schemes. Alternatives 1 to 5 considered the following combinations of technologies: 1) artificial insemination, 2) artificial insemination with sexed semen, 3) artificial insemination with embryo transfer, 4) artificial insemination and embryo transfer with few bulls as sires, and 5) artificial insemination, embryo transfer, and sexed semen with few bulls, respectively. Alternatives 6 to 12 considered cloning from dams. Alternatives 11 and 12 considered a regular progeny-testing scheme that had selection gains (intensity x accuracy x genetic standard deviation) of 890, 300, 600, and 89 kg, respectively, for the four paths. The sums of the generation intervals of the four paths were 19 yr for the first 8 alternatives and 19.5, 22, 29, and 29.5 yr for alternatives 9 to 12, respectively. Rates of genetic gain in milk yield for alternatives 1 to 5 were 257, 281, 316, 327, and 340 kg/yr, respectively. The rate of gain for other alternatives increased as number of clones increased. The use of three records per clone increased both accuracy and generation interval of a path. Cloning was highly beneficial for progeny-testing schemes with lower intensity and accuracy of selection. The discounted economic gain (break-even cost) per clone was the highest ($84) at current selection levels using sexed semen and three records on clones of the dam. The total cost associated with cloning has to be below $84 for cloning to be an economically viable option.

  8. A secure and efficient password-based user authentication scheme using smart cards for the integrated EPR information system.

    PubMed

    Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng

    2013-06-01

    The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only has a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems in previous schemes and withstands possible attacks.

  9. A digital memories based user authentication scheme with privacy preservation.

    PubMed

    Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang

    2017-01-01

    The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been shown to be insecure, hard to remember and vulnerable to guessing, dictionary attacks, key-loggers, shoulder-surfing and social engineering. In response, a large number of alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information, or on extra hardware (such as a USB key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.

  10. A digital memories based user authentication scheme with privacy preservation

    PubMed Central

    Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang

    2017-01-01

    The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been shown to be insecure, hard to remember and vulnerable to guessing, dictionary attacks, key-loggers, shoulder-surfing and social engineering. In response, a large number of alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information, or on extra hardware (such as a USB key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results. PMID:29190659

  11. VTOL shipboard letdown guidance system analysis

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Karmali, M. S.

    1983-01-01

    Alternative letdown guidance strategies are examined for landing a VTOL aircraft aboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is used to assess the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant-rate-of-descent (CROD) letdown strategy serves as a benchmark for ranking the performance of the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions, called lulls and swells. A real-time lull/swell classification algorithm based upon ship motion pattern features is developed. The classification algorithm is used to generate a go/no-go signal indicating the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.

  12. An Innovative Approach to Scheme Learning Map Considering Tradeoff Multiple Objectives

    ERIC Educational Resources Information Center

    Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping

    2016-01-01

    An important issue in personalized learning is to provide learners with customized learning according to their learning characteristics. This paper focuses on scheming the learning map as follows. The learning goal can be achieved via different pathways based on alternative materials, which have the relationships of prerequisite, dependence,…

  13. Nuclear Explosion and Infrasound Event Resources of the SMDC Monitoring Research Program

    DTIC Science & Technology

    2008-09-01

    2008 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies. Figure 7: Dozens of detected infrasound signals from... ...investigate alternative detection schemes at the two infrasound arrays based on frequency-wavenumber (fk) processing and the F-statistic. The results of... infrasound signal-detection processing schemes.

  14. Packaging waste prevention in the distribution of fruit and vegetables: An assessment based on the life cycle perspective.

    PubMed

    Tua, Camilla; Nessi, Simone; Rigamonti, Lucia; Dolci, Giovanni; Grosso, Mario

    2017-04-01

    In recent years, alternative food supply chains based on short-distance production and delivery have been promoted as more environmentally friendly than the traditional retailing system. An example is the supply of seasonal and possibly locally grown fruit and vegetables directly to customers inside a returnable crate (the so-called 'box scheme'). In addition to other claimed environmental and economic advantages, the box scheme is often listed among packaging waste prevention measures. To check whether such a claim is soundly based, a life cycle assessment was carried out to verify the real environmental effectiveness of the box scheme in comparison with the traditional Italian distribution. The study focused on two reference products, carrots and apples, which are available in the crate all year round. A box scheme operated in Italy was compared with several traditional scenarios in which the product is distributed loose or packaged through large-scale retail. Packaging waste generation, 13 impact indicators covering the environment and human health, and energy consumption were calculated. Results show that the analysed box scheme, as currently managed, cannot be considered a packaging waste prevention measure when compared with the traditional distribution of fruit and vegetables. The weaknesses of the alternative system were identified and recommendations were given to improve its environmental performance.

  15. Comparative efficiency of a scheme of cyclic alternating-period subtraction

    NASA Astrophysics Data System (ADS)

    Golikov, V. S.; Artemenko, I. G.; Malinin, A. P.

    1986-06-01

    The estimation of the detection quality of a signal against a background of correlated noise according to the Neyman-Pearson criterion is examined. It is shown that, in a number of cases, the cyclic alternating-period subtraction scheme has a higher noise immunity than the conventional alternating-period subtraction scheme.
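    The basic (non-cyclic) alternating-period subtraction idea can be illustrated with a simple pulse-to-pulse canceller; this is a generic sketch of period-to-period subtraction, not the authors' cyclic scheme, and the signal model is an illustrative assumption.

```python
import math

# Pulse-to-pulse subtraction: y[n] = x[n] - x[n-1] cancels returns that
# repeat identically from period to period (stationary clutter) while
# passing echoes whose value changes between periods (moving targets).
clutter = [5.0] * 16                              # identical every period
target = [math.sin(0.9 * n) for n in range(16)]   # changes each period
echo = [c + t for c, t in zip(clutter, target)]

residual = [echo[n] - echo[n - 1] for n in range(1, 16)]
clutter_only = [clutter[n] - clutter[n - 1] for n in range(1, 16)]
```

The clutter component cancels exactly, while the moving-target component survives subtraction.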

  16. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary quickly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low-rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
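    The alternating structure of sparsity-constrained matrix factorization can be sketched on a tiny rank-1 problem with alternating proximal-gradient updates. This is a simplified stand-in for SEMF: the objective, step sizes and data here are illustrative assumptions, and the paper's convex subproblems are solved by ADMM rather than a single proximal step.

```python
def soft(z, t):  # proximal operator of t*|.| (soft-thresholding)
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def obj(X, u, v, lam):  # 0.5*||X - u v^T||_F^2 + lam*(||u||_1 + ||v||_1)
    fit = sum((X[i][j] - u[i] * v[j])**2
              for i in range(len(u)) for j in range(len(v)))
    return 0.5 * fit + lam * (sum(map(abs, u)) + sum(map(abs, v)))

def prox_grad_update(X, u, v, lam):
    """One proximal-gradient step on u with v fixed (step 1/L, L = ||v||^2)."""
    L = max(sum(vj * vj for vj in v), 1e-8)
    return [soft(u[i] - (u[i] * L - sum(X[i][j] * v[j]
                                        for j in range(len(v)))) / L, lam / L)
            for i in range(len(u))]

X = [[4.0, 0.0], [0.0, 0.0]]       # tiny matrix with a sparse rank-1 part
u, v, lam = [1.0, 1.0], [1.0, 1.0], 0.1
vals = []
for _ in range(100):
    u = prox_grad_update(X, u, v, lam)
    XT = [list(col) for col in zip(*X)]        # transpose for the v-update
    v = prox_grad_update(XT, v, u, lam)
    vals.append(obj(X, u, v, lam))
```

Each half-step is a convex subproblem, so the overall objective is non-increasing, and the l1 penalty drives the unused components of u and v exactly to zero.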

  17. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
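    The simultaneous-update idea (advance the flow solution and the design parameters together instead of nesting a full flow solve inside each design step) can be shown on a toy one-dimensional "state equation". The fixed-point model, objective and step size below are illustrative assumptions, not the aerodynamic schemes of the paper.

```python
# Toy state equation u = 0.5*u + p (converged state u = 2p) and design
# objective J(u) = (u - 1)^2.  Rather than converging the state solver
# before each design update, both are iterated together ("one-shot").
u, p, eta = 0.0, 0.0, 0.05
for _ in range(300):
    u = 0.5 * u + p              # one fixed-point sweep of the state solver
    dJdp = 4.0 * (u - 1.0)       # chain rule at the model: 2(u-1) * du/dp, du/dp = 2
    p -= eta * dJdp              # one gradient step on the design parameter
# The coupled iteration converges to the constrained optimum u = 1, p = 0.5.
```

The coupled linear iteration has spectral radius below one for this step size, so state and design converge jointly without ever solving the state equation exactly.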

  18. New coherent laser communication detection scheme based on channel-switching method.

    PubMed

    Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren

    2015-04-01

    A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors which outputs the in-phase (I) channel and quadrature-phase (Q) channel signal current, respectively. With this method, the ultrahigh speed analog/digital transform of the signal of the I or Q channel is not required. The phase error between the signal and local lasers is obtained by simple analog circuit. Using the phase error signal, the signals of the I/Q channel are switched alternately. The principle of this detection scheme is presented. Moreover, the comparison of the sensitivity of this scheme with that of homodyne detection with an optical phase-locked loop is discussed. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme could be realized through simple structure and has potential applications in cost-effective high-speed laser communication.

  19. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also poses challenges of heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
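    The two-stage selection logic (a cheap score prunes the pool, an expensive score ranks the survivors) can be sketched generically. The scores and items below are hypothetical toy values, not the paper's registration-based relevance metrics.

```python
def two_stage_select(items, cheap, costly, m, k):
    """Stage 1: keep an augmented subset of size m by a cheap score.
    Stage 2: rank those m by the expensive score and keep the top k.
    Only m (not all) items ever incur the expensive evaluation."""
    stage1 = sorted(items, key=cheap, reverse=True)[:m]
    return sorted(stage1, key=costly, reverse=True)[:k]

# Toy 'atlases': a true relevance score, plus a cheap proxy that is only
# roughly correlated with it (hypothetical values, for illustration).
truth = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.3, "e": 0.1}
proxy = {"a": 0.7, "b": 0.9, "c": 0.6, "d": 0.4, "e": 0.2}

chosen = two_stage_select(list(truth), proxy.get, truth.get, m=4, k=2)
```

With the augmented subset size m chosen large enough, the truly best atlases survive the cheap preliminary stage, so the final fusion set matches what exhaustive expensive scoring would have selected.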

  20. Delivering an Alternative Medicine Resource to the User's Desktop via World Wide Web.

    ERIC Educational Resources Information Center

    Li, Jie; Wu, Gang; Marks, Ellen; Fan, Weiyu

    1998-01-01

    Discusses the design and implementation of a World Wide Web-based alternative medicine virtual resource. This homepage integrates regional, national, and international resources and delivers library services to the user's desktop. Goals, structure, and organizational schemes of the system are detailed, and design issues for building such a…

  1. Quantum attack-resistant certificateless multi-receiver signcryption scheme.

    PubMed

    Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong

    2013-01-01

    The existing certificateless signcryption schemes were designed mainly on the basis of traditional public key cryptography, in which security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems can be efficiently solved by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity, such as smart cards.

  2. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
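    Raw bits from a physical stochastic source are generally biased, and a standard software whitening step is von Neumann debiasing (the abstract does not specify the authors' post-processing; this is a generic illustration, with a pseudo-random stand-in for the device's bit stream).

```python
import random

def von_neumann(bits):
    """Debias a biased-but-independent bit stream: map the pair 01 -> 0,
    10 -> 1, and discard 00 and 11.  Outputs are unbiased if inputs are
    i.i.d., because P(01) = P(10) regardless of the input bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

rng = random.Random(0)                       # deterministic demo stream
raw = [1 if rng.random() < 0.8 else 0 for _ in range(200_000)]
white = von_neumann(raw)
bias = sum(white) / len(white)               # close to 0.5 after debiasing
```

The price of exact unbiasing is throughput: on average only 2p(1-p) output bits are produced per input pair, which is why low-energy raw sources matter.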

  3. Women’s preferences for alternative financial incentive schemes for breastfeeding: A discrete choice experiment

    PubMed Central

    Anokye, Nana; de Bekker-Grob, Esther W.; Higgins, Ailish; Relton, Clare; Strong, Mark; Fox-Rushby, Julia

    2018-01-01

    Background: Increasing breastfeeding rates have been associated with reductions in disease in babies and mothers, as well as in related costs. 'Nourishing Start for Health (NoSH)', a financial incentive scheme, has been proposed as a potentially effective way to increase both the number of mothers breastfeeding and the duration of breastfeeding. Aims: To establish women's relative preferences for different aspects of a financial incentive scheme for breastfeeding, and to identify the importance of scheme characteristics for the probability of participation in an incentive scheme. Methods: A discrete choice experiment (DCE) obtained information on alternative specifications of the NoSH scheme designed to promote continued breastfeeding until at least 6 weeks after birth. Four attributes framed the alternative scheme designs: value of the incentive; minimum breastfeeding duration required to receive the incentive; method of verifying breastfeeding; and type of incentive. Three versions of the DCE questionnaire, each containing 8 different choice sets, provided 24 choice sets for analysis. The questionnaire was mailed to 2,531 women in the South Yorkshire Cohort (SYC) aged 16–45 years in IMD quintiles 3–5. The analytic approach considered conditional and mixed-effects logistic models to account for preference heterogeneity that may be associated with variation in effects mediated by respondents' characteristics. Results: 564 women completed the questionnaire, a response rate of 22%. Most of the included attributes were found to affect utility and therefore the probability of participating in the incentive scheme. Higher rewards were preferred, although the type of incentive also significantly affected women's preferences on average. We found evidence for preference heterogeneity based on individual characteristics that mediated preferences for an incentive scheme. Conclusions: Although opinion in our sample was mixed, financial incentives for breastfeeding may be an acceptable and effective instrument to change behaviour. However, individual characteristics could mediate the effect and should therefore be considered when developing and targeting future interventions. PMID:29649245

  4. Multi-criteria decision aid approach for the selection of the best compromise management scheme for ELVs: the case of Cyprus.

    PubMed

    Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M

    2007-08-25

    Each alternative scheme for treating a vehicle at its end of life has its own social, environmental, economic and technical consequences. Furthermore, the criteria used to assess these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, no single optimal alternative exists. A multiple-criteria decision aid (MCDA) method to help the Decision Maker (DM) select the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternatives and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple-criteria outranking methods. For this purpose, level, linear and Gaussian preference functions are used.
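    The core PROMETHEE computation (pairwise preferences aggregated into outranking flows) can be sketched for the linear preference function; the alternatives, criteria scores, weights and thresholds below are hypothetical, not the paper's ELV data.

```python
def promethee_net_flows(names, scores, weights, p):
    """PROMETHEE II with a linear preference function: for a criterion
    difference d, P(d) = min(d / p, 1) if d > 0, else 0; returns the net
    outranking flow phi(a) = phi_plus(a) - phi_minus(a)."""
    def pref(a, b):  # aggregated preference index pi(a, b)
        total = 0.0
        for j, w in enumerate(weights):
            d = scores[a][j] - scores[b][j]
            total += w * (min(d / p[j], 1.0) if d > 0 else 0.0)
        return total
    n = len(names)
    phi = {}
    for a in names:
        plus = sum(pref(a, b) for b in names if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in names if b != a) / (n - 1)
        phi[a] = plus - minus
    return phi

# Hypothetical ELV-management options scored on two maximised criteria.
scores = {"A": (4.0, 4.0), "B": (2.0, 2.0), "C": (0.0, 0.0)}
phi = promethee_net_flows(list(scores), scores,
                          weights=(0.5, 0.5), p=(2.0, 2.0))
ranking = sorted(phi, key=phi.get, reverse=True)   # best compromise first
```

Ranking by net flow gives the complete PROMETHEE II preorder; PROMETHEE I instead compares the positive and negative flows separately and may leave some pairs incomparable.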

  5. Reconsideration of the scheme of the international classification of functioning, disability and health: incentives from the Netherlands for a global debate.

    PubMed

    Heerkens, Yvonne F; de Weerd, Marjolein; Huber, Machteld; de Brouwer, Carin P M; van der Veen, Sabina; Perenboom, Rom J M; van Gool, Coen H; Ten Napel, Huib; van Bon-Martens, Marja; Stallinga, Hillegonda A; van Meeteren, Nico L U

    2018-03-01

    The ICF (International Classification of Functioning, Disability and Health) framework, used worldwide to describe 'functioning' and 'disability', including the ICF scheme (a visualization of functioning as the result of interaction with the health condition and contextual factors), needs reconsideration. The purpose of this article is to discuss alternative ICF schemes. The ICF was reconsidered via a literature review and discussions with 23 Dutch ICF experts, and 26 experts were then invited to rank the three resulting alternative schemes. The literature review yielded five themes: 1) societal developments; 2) health and research influences; 3) conceptualization of health; 4) models/frameworks of health and disability; and 5) ICF criticism (e.g. the position of 'health condition' at the top and the role of 'contextual factors'). The experts concluded that the ICF scheme gives the impression that the medical perspective, rather than the biopsychosocial perspective, is dominant. Three alternative ICF schemes were ranked by 16 (62%) experts, resulting in one preferred scheme. There is a need for a new ICF scheme that better reflects the ICF framework, for further (inter)national consideration. These Dutch schemes should be reviewed on a global scale, to develop a scheme that is more consistent with current and foreseen developments and changing ideas about health. Implications for Rehabilitation: We propose that policy makers at the community, regional and (inter)national levels consider using the alternative schemes of the International Classification of Functioning, Disability and Health within their plans to promote the functioning and health of their citizens, and that researchers and teachers incorporate the alternative schemes into their research and education to emphasize the biopsychosocial paradigm. 
    We propose setting up an international Delphi procedure involving citizens (including patients) and experts in healthcare, occupational care, research, education, policy and planning, to reach consensus on an alternative scheme of the International Classification of Functioning, Disability and Health. We recommend discussing the alternatives to the present scheme within the World Health Organization's current update and revision process, as part of the discussion on the future of the ICF framework (including its ontology, title and relation to the International Classification of Diseases). We also recommend revising the definition of personal factors, drafting a list of personal factors that can be used in policy making, clinical practice, research and education, and revising the present list of environmental factors to make it more useful in, for example, occupational health care.

  6. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme, which takes full advantage of the intra-band and inter-band redundancy, is proposed first. Then, a more easily implementable alternative, based on an efficient block-based edge-estimation technique, is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit-allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
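
    As a concrete illustration of the codebook training that underlies such subband-VQ coders, the sketch below trains a small vector quantizer with plain k-means (Lloyd's algorithm) on toy 4-dimensional "blocks". The data, codebook size, and iteration count are hypothetical; the schemes in the paper add quadtree, edge-estimation, and finite-state machinery on top of this basic step.

```python
import numpy as np

def train_vq_codebook(vectors, k, iters=20, seed=0):
    """Train a VQ codebook with plain k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest codeword (squared-error distortion).
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each codeword to the centroid of its assigned vectors.
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, labels

# Toy "upper subband": mostly near-zero blocks plus a few edge-like blocks.
rng = np.random.default_rng(1)
flat = rng.normal(0.0, 0.05, size=(200, 4))
edges = rng.normal(3.0, 0.05, size=(20, 4))
data = np.vstack([flat, edges])
codebook, labels = train_vq_codebook(data, k=2)
```

    The two codewords settle near the two block populations, which is why a small codebook already captures the sparse, edge-dominated statistics of upper subbands.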

  7. 78 FR 60670 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... in this regard. Request To Approve an Alternate Generic Repair Scheme as an AMOC British Airways requested that an alternate generic repair scheme be approved as an AMOC to this final rule. British Airways... scheme to British Airways which allowed British Airways to manufacture certain repair parts. British...

  8. Nagy-Soper subtraction scheme for multiparton final states

    NASA Astrophysics Data System (ADS)

    Chung, Cheng-Han; Robens, Tania

    2013-04-01

    In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final-state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final-state momentum mappings. While a previous publication including the setup of the scheme was restricted to cases with at most two massless partons in the final state, we here provide the final-state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three-jet production at lepton colliders at next-to-leading order and present results for the differential C-parameter distribution.

  9. Microelectromechanical reprogrammable logic device.

    PubMed

    Hafiz, M A A; Kosuru, L; Younis, M I

    2016-03-29

    In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.

  10. Microelectromechanical reprogrammable logic device

    PubMed Central

    Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.

    2016-01-01

    In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme. PMID:27021295

  11. Quantum Attack-Resistent Certificateless Multi-Receiver Signcryption Scheme

    PubMed Central

    Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong

    2013-01-01

    The existing certificateless signcryption schemes were designed mainly on the basis of traditional public key cryptography, in which security relies on hard problems such as factor decomposition and discrete logarithm. However, these problems can be solved easily by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions for guaranteeing the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can thus withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed to signcrypt a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis also shows that our scheme has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy, and public verifiability. Compared with existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computational capacity such as smart cards. PMID:23967037
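
    To make the hardness assumption concrete: an MPKC public key is a system of multivariate quadratic polynomials over a finite field, and breaking it means finding a preimage, which is the MQ problem (NP-hard in general). The toy sketch below uses hypothetical parameters n = m = 6 over GF(2), far below cryptographic size; it evaluates such a system and shows that naive inversion already requires searching all 2^n inputs.

```python
import itertools
import random

def eval_mq(polys, x):
    """Evaluate a system of multivariate quadratic polynomials over GF(2).
    Each polynomial is (quadratic index pairs, linear indices, constant)."""
    out = []
    for quad, lin, const in polys:
        v = const
        for i, j in quad:
            v ^= x[i] & x[j]
        for i in lin:
            v ^= x[i]
        out.append(v)
    return out

random.seed(0)
n, m = 6, 6  # toy size; real MPKC schemes use far more variables
polys = []
for _ in range(m):
    quad = [(i, j) for i in range(n) for j in range(i, n) if random.random() < 0.5]
    lin = [i for i in range(n) if random.random() < 0.5]
    polys.append((quad, lin, int(random.random() < 0.5)))

secret = [random.randint(0, 1) for _ in range(n)]
target = eval_mq(polys, secret)
# Inverting the public map without a trapdoor: exhaustive search over 2^n inputs.
preimages = [x for x in itertools.product([0, 1], repeat=n)
             if eval_mq(polys, list(x)) == target]
```

    A trapdoor (the private key) makes inversion easy for the legitimate receiver, while an attacker faces the exponential search sketched above.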

  12. Evaluation of a Multigrid Scheme for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.

    2004-01-01

    A fast multigrid solver for the steady, incompressible Navier-Stokes equations is presented. The multigrid solver is based upon a factorizable discrete scheme for the velocity-pressure form of the Navier-Stokes equations. This scheme correctly distinguishes between the advection-diffusion and elliptic parts of the operator, allowing efficient smoothers to be constructed. To evaluate the multigrid algorithm, solutions are computed for flow over a flat plate, a parabola, and a Karman-Trefftz airfoil. Both nonlifting and lifting airfoil flows are considered, with a Reynolds number range of 200 to 800. Convergence and accuracy of the algorithm are discussed. Using Gauss-Seidel line relaxation in alternating directions, multigrid convergence behavior approaching that of O(N) methods is achieved. The computational efficiency of the numerical scheme is compared with that of Runge-Kutta and implicit upwind-based multigrid methods.

  13. Coherent receiver design based on digital signal processing in optical high-speed intersatellite links with M-phase-shift keying

    NASA Astrophysics Data System (ADS)

    Schaefer, Semjon; Gregory, Mark; Rosenkranz, Werner

    2016-11-01

    We present simulative and experimental investigations of different coherent receiver designs for high-speed optical intersatellite links. We focus on frequency offset (FO) compensation in homodyne and intradyne detection systems. The considered laser communication terminal uses an optical phase-locked loop (OPLL), which ensures stable homodyne detection; however, the hardware complexity increases with the modulation order. Therefore, we show that software-based intradyne detection is an attractive alternative to OPLL-based homodyne systems. Our approach is based on digital FO and phase-noise compensation, in order to achieve a more flexible coherent detection scheme. Analytic results further show the theoretical impact of the different detection schemes on the receiver sensitivity. Finally, we compare the schemes in terms of bit-error-ratio measurements and optimal receiver design.
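
    A common digital FO estimator for M-PSK, of the kind such intradyne receivers rely on, raises the signal to the M-th power to strip the modulation and locates the resulting tone with an FFT. The sketch below is a minimal, noise-free QPSK illustration with hypothetical sample rate, block length, and offset; it is not the authors' actual receiver design.

```python
import numpy as np

def estimate_fo(samples, M, fs):
    """Estimate the carrier frequency offset of an M-PSK signal.
    Raising to the M-th power removes the PSK modulation, leaving a tone
    at M times the offset; an FFT peak search recovers it. Requires the
    true offset to satisfy |FO| < fs / (2*M) to avoid aliasing."""
    x = samples ** M
    spec = np.abs(np.fft.fft(x))
    freqs = np.fft.fftfreq(len(x), d=1 / fs)
    return freqs[spec.argmax()] / M

rng = np.random.default_rng(0)
fs, n, M = 1e6, 4096, 4            # sample rate, block length, QPSK
fo = 12_500.0                      # true offset in Hz (hypothetical)
sym = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, n) + np.pi / 4))
t = np.arange(n) / fs
rx = sym * np.exp(2j * np.pi * fo * t)
fo_hat = estimate_fo(rx, M, fs)
```

    The estimate is accurate to within one FFT bin (fs/n divided by M), which is why block length trades off against acquisition speed in such software-based schemes.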

  14. Using Simulations to Investigate the Longitudinal Stability of Alternative Schemes for Classifying and Identifying Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.

    2016-01-01

    The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…

  15. New LNG process scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foglietta, J.H.

    1999-07-01

    A new LNG cycle has been developed for base-load liquefaction facilities. The new design offers a different technical and economic solution, comparable in efficiency with the classical technologies. The new LNG scheme could offer attractive business opportunities to oil and gas companies that are trying to find paths to monetize gas sources more effectively, particularly for remote or offshore locations where smaller-scale LNG facilities might be applicable. This design also offers an alternative route to classic LNG projects, as well as alternative fuel sources. Conceived to offer simplicity and access to industry-standard equipment, this design is a hybrid that combines a standard refrigeration system with turboexpander technology.

  16. Passive and active plasma deceleration for the compact disposal of electron beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonatto, A., E-mail: abonatto@lbl.gov; CAPES Foundation, Ministry of Education of Brazil, Brasília, DF 700040-020; Schroeder, C. B.

    2015-08-15

    Plasma-based decelerating schemes are investigated as compact alternatives for the disposal of high-energy beams (beam dumps). Analytical solutions for the energy loss of electron beams propagating in passive and active (laser-driven) schemes are derived. These solutions, along with numerical modeling, are used to investigate the evolution of the electron distribution, including the energy chirp and the total beam energy. In the active beam dump scheme, a laser driver allows a more homogeneous extraction of the beam energy and drastically reduces the energy chirp observed in the passive scheme. These concepts could benefit applications requiring overall compactness, such as transportable light sources, or facilities operating at high beam power.

  17. Consistent forcing scheme in the cascaded lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Fei, Linlin; Luo, Kai Hong

    2017-11-01

    In this paper, we give an alternative derivation of the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM degrades into an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the no-slip rule, the second-order convergence rate in space, and the property of isotropy for the consistent forcing scheme are demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between the MRT LBM and the CLBM under a general framework.

  18. A Model-Data Intercomparison of Carbon Fluxes, Pools, and LAI in the Community Land Model (CLM) and Alternative Carbon Allocation Schemes

    NASA Astrophysics Data System (ADS)

    Montane, F.; Fox, A. M.; Arellano, A. F.; Alexander, M. R.; Moore, D. J.

    2016-12-01

    Carbon (C) allocation to different plant tissues (leaves, stem, and roots) remains a central challenge for understanding the global C cycle, as it determines C residence time. We used a diverse set of observations (AmeriFlux eddy covariance towers, biomass estimates from tree-ring data, and Leaf Area Index measurements) to compare C fluxes, pools, and Leaf Area Index (LAI) data with the Community Land Model (CLM). We ran CLM for seven temperate forests in North America (including evergreen and deciduous sites) between 1980 and 2013 using different C allocation schemes: i) the standard C allocation scheme in CLM, which allocates C to the stem and leaves as a dynamic function of annual net primary productivity (NPP); ii) two fixed C allocation schemes, one representative of evergreen and the other of deciduous forests, based on Luyssaert et al. 2007; and iii) an alternative C allocation scheme, which allocated C to stem and leaves, and to stem and coarse roots, as a dynamic function of annual NPP, based on Litton et al. 2007. At our sites CLM usually overestimated gross primary production and ecosystem respiration, and underestimated net ecosystem exchange. Initial aboveground biomass in 1980 was largely overestimated for deciduous forests, whereas aboveground biomass accumulation between 1980 and 2011 was highly underestimated for both evergreen and deciduous sites, because the turnover rates at the sites were lower than the rate used in the model. CLM overestimated LAI at both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the one observed at our sites. Although the different C allocation schemes gave similar results for aggregated C fluxes, they translated into important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, one of the alternative C allocation schemes (iii) gave more realistic stem C/leaf C ratios and greatly reduced CLM's overestimation of initial aboveground biomass and of accumulated aboveground NPP. Our results suggest using different C allocation schemes for evergreen and deciduous forests. It is crucial to improve CLM in the near future to minimize data-model mismatches and to address some of the current model structural errors and parameter uncertainties.

  19. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts' Law test.

    PubMed

    Scheme, Erik J; Englehart, Kevin B

    2013-07-01

    When controlling a powered upper-limb prosthesis it is important to know not only how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art scheme based on linear discriminant analysis classification. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fits with high coefficients of determination (R^2 > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework, the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
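
    The Fitts' Law regression used in such an evaluation can be sketched as follows: compute the Shannon index of difficulty ID = log2(D/W + 1) for each trial and fit movement time MT = a + b*ID, reporting the coefficient of determination R^2. The numbers below are synthetic toy trials, not data from the study.

```python
import numpy as np

def fitts_fit(distances, widths, times):
    """Fit MT = a + b * ID with ID = log2(D/W + 1) (Shannon formulation)
    and return (intercept a, slope b, coefficient of determination R^2)."""
    ids = np.log2(np.asarray(distances) / np.asarray(widths) + 1)
    b, a = np.polyfit(ids, times, 1)
    pred = a + b * ids
    ss_res = ((times - pred) ** 2).sum()
    ss_tot = ((times - np.mean(times)) ** 2).sum()
    return a, b, 1 - ss_res / ss_tot

# Synthetic trials: movement time grows linearly with task difficulty.
D = np.array([40.0, 80.0, 160.0, 320.0, 640.0])   # target distances
W = np.array([20.0, 20.0, 20.0, 20.0, 20.0])      # target widths
MT = 0.2 + 0.15 * np.log2(D / W + 1)              # perfectly linear toy data
a, b, r2 = fitts_fit(D, W, MT)
```

    A high R^2 on real trial data, as reported in the paper, indicates that the controller's movement times are well explained by task difficulty alone.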

  20. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs arising from large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
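
    The two-stage idea can be sketched in a few lines: score all atlases with a cheap preliminary metric, keep an augmented subset, and spend the expensive metric (standing in for full-fledged registration) only on that subset. The scores, noise level, and subset sizes below are hypothetical stand-ins for the image-similarity metrics in the paper.

```python
import random

def two_stage_select(atlases, cheap_score, full_score, n_aug, n_fuse):
    """Two-stage subset selection: a cheap preliminary metric narrows the
    pool to an augmented subset of size n_aug, and the expensive metric is
    evaluated only on that subset to pick the final fusion set."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:n_aug]
    return sorted(stage1, key=full_score, reverse=True)[:n_fuse]

random.seed(3)
# Hypothetical atlases: a "true" relevance, observed noisily by the cheap metric.
atlases = [{"id": i, "true": random.random()} for i in range(100)]
for a in atlases:
    a["cheap"] = a["true"] + random.gauss(0.0, 0.1)

# The expensive metric (a["true"]) is computed for only n_aug of 100 atlases.
chosen = two_stage_select(atlases, lambda a: a["cheap"],
                          lambda a: a["true"], n_aug=20, n_fuse=5)
```

    Because the cheap metric correlates with the refined one, a modest augmented subset retains the truly relevant atlases with high probability, which is the property the paper's inference model quantifies.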

  1. On resilience studies of system detection and recovery techniques against stealthy insider attacks

    NASA Astrophysics Data System (ADS)

    Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.

    2016-05-01

    With the explosive growth of network technologies, insider attacks have become a major concern for business operations that rely largely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal detection scheme using the sequential hypothesis testing technique. Two hypotheses are considered: the null hypothesis that the collected information comes from benign historical traffic, and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once the attack is detected, a server migration-based system recovery scheme can be triggered to restore the system to its state prior to the attack. To understand the mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well in effectively detecting insider attacks and recovering the system under attack.
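
    The sequential hypothesis test described above can be sketched with Wald's SPRT for Gaussian observations. The traffic statistics (means, variance) and error targets below are hypothetical, chosen only to show how the log-likelihood ratio walks toward one of the two decision thresholds.

```python
import math
import random

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for Gaussian observations.
    H0: benign traffic with mean mu0; H1: attack traffic with mean mu1.
    Returns the accepted hypothesis and the number of samples consumed."""
    upper = math.log((1 - beta) / alpha)   # crossing it -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing it -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for N(mu1, sigma) vs N(mu0, sigma).
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

random.seed(0)
benign = [random.gauss(0.0, 1.0) for _ in range(500)]
attack = [random.gauss(1.5, 1.0) for _ in range(500)]
print(sprt(benign, 0.0, 1.5, 1.0))
print(sprt(attack, 0.0, 1.5, 1.0))
```

    The appeal for insider-attack detection is that the test stops as soon as the evidence is sufficient, so marginal but persistent traffic manipulation is flagged in the shortest expected time for the chosen error rates.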

  2. Considerations and techniques for incorporating remotely sensed imagery into the land resource management process.

    NASA Technical Reports Server (NTRS)

    Brooner, W. G.; Nichols, D. A.

    1972-01-01

    A scheme is developed for utilizing remote sensing technology in an operational program for regional land use planning and land resource management applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.

  3. Multigrid calculation of three-dimensional turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1989-01-01

    Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.

  4. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).

  5. Spectrum efficient distance-adaptive paths for fixed and fixed-alternate routing in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Bhatia, Vimal; Prakash, Shashi

    2018-01-01

    Efficient utilization of spectrum is a key concern in the soon to be deployed elastic optical networks (EONs). To perform routing in EONs, various fixed routing (FR), and fixed-alternate routing (FAR) schemes are ubiquitously used. FR, and FAR schemes calculate a fixed route, and a prioritized list of a number of alternate routes, respectively, between different pairs of origin o and target t nodes in the network. The route calculation performed using FR and FAR schemes is predominantly based on either the physical distance, known as k -shortest paths (KSP), or on the hop count (HC). For survivable optical networks, FAR usually calculates link-disjoint (LD) paths. These conventional routing schemes have been efficiently used for decades in communication networks. However, in this paper, it has been demonstrated that these commonly used routing schemes cannot utilize the network spectral resources optimally in the newly introduced EONs. Thus, we propose a new routing scheme for EON, namely, k -distance adaptive paths (KDAP) that efficiently utilizes the benefit of distance-adaptive modulation, and bit rate-adaptive superchannel capability inherited by EON to improve spectrum utilization. In the proposed KDAP, routes are found and prioritized on the basis of bit rate, distance, spectrum granularity, and the number of links used for a particular route. To evaluate the performance of KSP, HC, LD, and the proposed KDAP, simulations have been performed for three different sized networks, namely, 7-node test network (TEST7), NSFNET, and 24-node US backbone network (UBN24). We comprehensively assess the performance of various conventional, and the proposed routing schemes by solving both the RSA and the dual RSA problems under homogeneous and heterogeneous traffic requirements. Simulation results demonstrate that there is a variation amongst the performance of KSP, HC, and LD, depending on the o - t pair, and the network topology and its connectivity. 
However, the proposed KDAP always performs better than the conventional routing schemes (KSP, HC, and LD) for all the considered networks and traffic scenarios. The proposed KDAP achieves up to 60% and 10.46% improvement in terms of spectrum utilization and resource utilization ratio, respectively, over the conventional routing schemes.
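
    The core of distance-adaptive routing is that a shorter candidate path admits a higher-order modulation format and therefore needs fewer frequency slots, which a route-ranking scheme like KDAP can exploit. The reach table, slot width, and guard band below are hypothetical illustrative numbers, not the values used in the paper.

```python
import math

# Hypothetical distance-adaptive modulation table: (max reach in km, bits/symbol).
MOD_TABLE = [(4000, 1), (2000, 2), (1000, 3), (500, 4)]  # BPSK .. 16QAM

def slots_needed(bit_rate_gbps, path_km, gbaud_per_slot=12.5, guard=1):
    """Choose the highest-order modulation whose reach covers the path,
    then count the 12.5-GHz frequency slots the superchannel needs."""
    usable = [bps for reach, bps in MOD_TABLE if reach >= path_km]
    if not usable:
        raise ValueError("path exceeds the reach of every format")
    bps = max(usable)
    return math.ceil(bit_rate_gbps / (gbaud_per_slot * bps)) + guard

# A shorter path admits a denser format and therefore fewer slots.
print(slots_needed(400, 600))    # 8QAM-class format
print(slots_needed(400, 1800))   # QPSK-class format
```

    Ranking candidate routes by this slot count (together with hop and link usage) rather than by raw distance alone is what lets a distance-adaptive scheme pack more traffic into the same spectrum.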

  6. Athermal laser design.

    PubMed

    Bovington, Jock; Srinivasan, Sudharsanan; Bowers, John E

    2014-08-11

    This paper discusses circuit-based and waveguide-based athermalization schemes and provides some design examples of athermalized lasers utilizing fully integrated athermal components as an alternative to power-hungry thermo-electric controllers (TECs), off-chip wavelength lockers, or monitors with lookup tables for tunable lasers. This class of solutions is important for uncooled transmitters on silicon.

  7. Diffusion of Zonal Variables Using Node-Centered Diffusion Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, T B

    2007-08-06

    Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found that their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient can be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.

  8. Temporal Surface Reconstruction

    DTIC Science & Technology

    1991-05-03

    ... and the convergence cannot be guaranteed. Maybank [68] investigated alternative incremental schemes for the estimation of feature locations from a ... depth from image sequences. International Journal of Computer Vision, 3, 1989. [68] S. J. Maybank. Filter based estimates of depth. In Proceedings of the ...

  9. Computational and analytical comparison of flux discretizations for the semiconductor device equations beyond Boltzmann statistics

    NASA Astrophysics Data System (ADS)

    Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen

    2017-10-01

    We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
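
    For reference, the classical Scharfetter-Gummel flux that the paper extends can be written with the Bernoulli function B(x) = x/(e^x - 1). The sketch below uses scaled units (edge length, mobility, and thermal voltage set to 1) and checks the Boltzmann-equilibrium property that the flux vanishes when the density follows exp(psi); the sign convention is one common choice, not necessarily the paper's.

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), handling the removable singularity at x = 0."""
    if abs(x) < 1e-10:
        return 1.0 - x / 2.0
    return x / math.expm1(x)

def sg_flux(n_l, n_r, dpsi):
    """Classical Scharfetter-Gummel electron flux on an edge in scaled units
    (edge length, mobility, thermal voltage = 1); dpsi = psi_r - psi_l."""
    return n_r * bernoulli(dpsi) - n_l * bernoulli(-dpsi)

# Zero field: the flux reduces to the plain diffusion difference n_r - n_l.
print(sg_flux(2.0, 1.0, 0.0))
# Boltzmann equilibrium n ~ exp(psi): the flux vanishes identically.
print(sg_flux(math.exp(0.0), math.exp(0.7), 0.7))
```

    The schemes compared in the paper keep this structure but either average the nonlinear diffusion or modify the effective density of states so that the construction carries over to Fermi-Dirac statistics.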

  10. Simulations of Merging Helion Bunches on the AGS Injection Porch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, C. J.

    During the setup of helions for the FY2014 RHIC run it was discovered that the standard scheme for merging bunches on the AGS injection porch required an injection kicker pulse shorter than what was available. To overcome this difficulty, K. Zeno proposed and developed an interesting and unusual alternative which uses RF harmonic numbers 12, 4, 2 (rather than the standard 8, 4, 2) to merge 8 helion bunches into 2. In this note we carry out simulations that illustrate how the alternative scheme works and how it compares with the standard scheme. This is done in Sections 13 and 14. A scheme in which 6 bunches are merged into 1 is simulated in Section 15. This may be useful if more helions per merged bunch are needed in future runs. General formulae for the simulations are given in Sections 9 through 12. For completeness, Sections 1 through 8 give a derivation of the turn-by-turn equations of longitudinal motion at constant magnetic field. The derivation is based on the work of MacLachlan. The reader may wish to skip over these sections and start with Section 9.
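
    Turn-by-turn longitudinal equations of the kind derived in Sections 1 through 8 have the generic form of an energy kick at the RF gap followed by a phase slip over one turn. The toy tracker below uses that structure with made-up scaled parameters (it is not the note's actual AGS setup) and simply shows a particle executing stable synchrotron oscillations inside the bucket.

```python
import math

def track(phi0, de0, turns, V=1e-4, h=8, eta=-0.5, beta2e=1.0, phi_s=0.0):
    """Toy turn-by-turn longitudinal map at constant magnetic field:
    an RF energy kick followed by a phase slip, in arbitrary scaled units.
    V: RF voltage, h: harmonic number, eta: slip factor, phi_s: synchronous phase."""
    phi, de = phi0, de0
    orbit = [(phi, de)]
    for _ in range(turns):
        de = de + V * (math.sin(phi) - math.sin(phi_s))    # RF kick
        phi = phi + 2.0 * math.pi * h * eta * de / beta2e  # phase slip per turn
        orbit.append((phi, de))
    return orbit

orbit = track(phi0=0.3, de0=0.0, turns=2000)
# Small-amplitude motion: phi oscillates about the synchronous phase.
print(max(p for p, _ in orbit), min(p for p, _ in orbit))
```

    Because the kick-then-slip update is symplectic, the oscillation amplitude stays bounded over many turns, which is the property merge simulations of this kind depend on.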

  11. Modeling and Analysis of Energy Conservation Scheme Based on Duty Cycling in Wireless Ad Hoc Sensor Network

    PubMed Central

    Chung, Yun Won; Hwang, Ho Young

    2010-01-01

    In sensor networks, energy conservation is one of the most critical issues, since sensor nodes must perform a sensing task for a long time (e.g., a few years) but their batteries cannot be replaced in most practical situations. Numerous energy conservation schemes have been proposed for this purpose, and the duty cycling scheme is considered the most suitable power conservation technique, in which sensor nodes alternate between states with different levels of power consumption. In order to analyze the energy consumption of an energy conservation scheme based on duty cycling, it is essential to obtain the probability of each state. In this paper, we analytically derive the steady-state probabilities of the sensor node states, i.e., the sleep, listen, and active states, based on traffic characteristics and timer values, i.e., the sleep, listen, and active timers. The effect of traffic characteristics and timer values on the steady-state probabilities and energy consumption is analyzed in detail. Our work provides sensor network operators with a guideline for selecting appropriate timer values for efficient energy conservation. The analytical methodology developed in this paper can be extended, without much difficulty, to other duty-cycling-based energy conservation schemes with different sensor node states. PMID:22219676
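    The steady-state probabilities the abstract refers to can be illustrated with a toy semi-Markov calculation. This sketch assumes a simple cycle structure (sleep, then listen, then active with some traffic probability, else back to sleep) that is not necessarily the paper's exact model; each state's long-run time fraction weights its visit frequency by its timer value:

```python
def duty_cycle_fractions(t_sleep, t_listen, t_active, p_traffic):
    """Long-run fraction of time spent in each state, for the assumed cycle
    sleep -> listen -> (active with probability p_traffic, else sleep).

    Per cycle of the embedded chain, sleep and listen are each visited once
    and active is visited with probability p_traffic; each visit is weighted
    by its mean holding time (the timer value)."""
    weights = (t_sleep, t_listen, p_traffic * t_active)
    total = sum(weights)
    return {s: w / total for s, w in zip(("sleep", "listen", "active"), weights)}
```

    For example, with timers (2, 1, 1) and traffic always present, the node spends half its time asleep; with no traffic at all, the active fraction is zero.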

  12. Medical image enhancement using resolution synthesis

    NASA Astrophysics Data System (ADS)

    Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.

    2011-03-01

    We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high quality image scanned at a high dose level. Image enhancement is achieved by predicting the high quality image with classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image detail. Alternatively, our scheme can be applied to reduce dose while maintaining image quality at an acceptable level.
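    Classification-based linear regression as described combines a patch classifier with one ridge-regularized linear predictor per class. A schematic sketch (the clustering, the patch features, and the regularization weight are illustrative, not the paper's design):

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def classwise_predict(x, centers, weights):
    """Classify x to the nearest class center, then apply that class's
    linear predictor -- the core of classification-based regression."""
    k = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
    return float(x @ weights[k])
```

    In the RS setting, x would be a low-dose patch feature vector and the prediction a high-dose pixel estimate; the minimum description length step would choose how many classes (predictors) to keep.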

  13. A search for space energy alternatives

    NASA Technical Reports Server (NTRS)

    Gilbreath, W. P.; Billman, K. W.

    1978-01-01

    This paper takes a look at a number of schemes for converting radiant energy in space to useful energy for man. These schemes are possible alternatives to the currently most studied solar power satellite concept. Possible primary collection and conversion devices discussed include the space particle flux devices, solar windmills, photovoltaic devices, photochemical cells, photoemissive converters, heat engines, dielectric energy conversion, electrostatic generators, plasma solar collectors, and thermionic schemes. Transmission devices reviewed include lasers and masers.

  14. Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing

    2017-05-01

    The roll grinder is an important part of rolling machinery, and the grinding precision of the roll surface has a direct influence on the surface quality of steel strip. During the grinding process, however, the centre bears the weight of the roll under alternating stress. Wear or spalling faults are therefore easily observed on the centre, which leads to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of a roll grinder. First, the fast kurtogram method is employed to select the sub-band filter parameters for optimal resonance demodulation. The envelope spectrum is then derived from the filtered signal. Finally, two health indicators are designed to diagnose the centre wear fault. The proposed scheme is assessed by analysing experimental data from the roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
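    The envelope-spectrum step of resonance demodulation can be sketched with a NumPy-only analytic signal (an FFT-based Hilbert transform); the band-pass filtering that the kurtogram would select is omitted here, so this is an illustration of the demodulation step alone:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the signal envelope, using an FFT-based
    analytic signal (assumes len(x) is even)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0                      # one-sided weighting -> analytic signal
    envelope = np.abs(np.fft.ifft(X * h))  # |analytic signal| = envelope
    env_ac = envelope - envelope.mean()    # drop the DC term
    spec = np.abs(np.fft.rfft(env_ac)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec
```

    For an amplitude-modulated test signal, the peak of the envelope spectrum sits at the modulation frequency, which is exactly the fault-frequency signature this kind of diagnosis looks for.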

  15. A CRISPR-based MLST Scheme for Understanding the Population Biology and Epidemiology of Salmonella Enterica

    DTIC Science & Technology

    2015-05-26

    ...whether CRISPR protects the bacteria against foreign DNA as described in other systems, or whether it has alternative functions. Here, we report that CRISPR can be used to subtype Salmonella enterica serovariants... N. Shariat, R. E. Timme, J. B. Pettengill, R. Barrangou, E. G. Dudley. Characterization and evolution of Salmonella CRISPR-Cas systems

  16. Effects of China's New Rural Cooperative Medical Scheme on reducing medical impoverishment in rural Yanbian: An alternative approach.

    PubMed

    Sun, Mei; Shen, Jay J; Li, Chengyue; Cochran, Christopher; Wang, Ying; Chen, Fei; Li, Pingping; Lu, Jun; Chang, Fengshui; Li, Xiaohong; Hao, Mo

    2016-08-22

    This study aimed to measure the poverty head count ratio and poverty gap of rural Yanbian in order to examine whether China's New Rural Cooperative Medical Scheme has alleviated its medical impoverishment and to compare the results of this alternative approach with those of a World Bank approach. This cross-sectional study was based on a stratified random sample survey of 1,987 households and 6,135 individuals conducted in 2008 across eight counties in Yanbian Korean Autonomous Prefecture, Jilin province, China. A new approach was developed to define and identify medical impoverishment. The poverty head count ratio, relative poverty gap, and average poverty gap were used to measure medical impoverishment. Changes in medical impoverishment after the reimbursement under the New Rural Cooperative Medical Scheme were also examined. The government-run New Rural Cooperative Medical Scheme reduced the number of medically impoverished households by 24.6 %, as well as the relative and average gaps by 37.3 % and 38.9 %, respectively. China's New Rural Cooperative Medical Scheme has certain positive but limited effects on alleviating medical impoverishment in rural Yanbian regardless of how medical impoverishment is defined and measured. More governmental and private-sector efforts should therefore be encouraged to further improve the system in terms of financing, operation, and reimbursement policy.

  17. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Two alternative schemes are involved in the evaluation: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with comparable computation time. Compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
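    The abstract does not spell out the folded concave penalty; one common folded concave choice is the minimax concave penalty (MCP) applied groupwise to the l2 norm of each beam's fluence variables. A sketch under that assumption (the lam/gamma parameterization is illustrative):

```python
import math

def group_mcp_penalty(beta_groups, lam, gamma=3.0):
    """Group minimax concave penalty (one folded concave penalty family),
    applied to the l2 norm of each variable group.

    Unlike the l2,1 (group lasso) penalty, the MCP flattens out: groups
    with norm beyond gamma*lam incur a constant penalty, so large groups
    are not shrunk further -- the key 'concave' property."""
    total = 0.0
    for g in beta_groups:
        t = math.sqrt(sum(b * b for b in g))        # group l2 norm
        if t <= gamma * lam:
            total += lam * t - t * t / (2.0 * gamma)
        else:
            total += 0.5 * gamma * lam * lam        # saturated penalty
    return total
```

    This saturation is what reduces the bias of convex ℓ2,1 surrogates on strongly selected beam angles while still driving small groups to zero.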

  18. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    [Formula: see text]-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we reduce the nontrivial inconsistencies between the [Formula: see text]-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the [Formula: see text]-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Two alternative schemes are involved in the evaluation: the [Formula: see text]-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using [Formula: see text]-minimization for all three cases with comparable computation time. Compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  19. Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding

    NASA Astrophysics Data System (ADS)

    Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin

    2014-10-01

    Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.

  20. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
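    The two ingredients of ARH coding are run-length description of the dominant symbol and Huffman coding of the resulting alphabet. A minimal sketch of each ingredient (the alternation logic of the actual ARH scheme is not reproduced here):

```python
import heapq

def huffman_code_lengths(freqs):
    """Code length in bits per symbol for a Huffman code built from freqs."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tie = len(heap)                      # tie-breaker so tuples never compare lists
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1              # each merge adds one bit to its members
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return lengths

def run_lengths(data, dominant):
    """Lengths of dominant-symbol runs delimited by non-dominant symbols."""
    runs, n = [], 0
    for c in data:
        if c == dominant:
            n += 1
        else:
            runs.append(n)
            n = 0
    runs.append(n)
    return runs
```

    With a dominant symbol, the run-length alphabet is much more compressible than the raw symbol stream, which is the intuition behind ARH's efficiency gain over ordinary Huffman coding in those circumstances.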

  1. Incentivising effort in governance of public hospitals: Development of a delegation-based alternative to activity-based remuneration.

    PubMed

    Søgaard, Rikke; Kristensen, Søren Rud; Bech, Mickael

    2015-08-01

    This paper is a first examination of the development of an alternative to activity-based remuneration in public hospitals, which is currently being tested at nine hospital departments in a Danish region. The objective is to examine the process of delegating the authority of designing new incentive schemes from the principal (the regional government) to the agents (the hospital departments). We adopt a theoretical framework where, when deciding about delegation, the principal should trade off an initiative effect against the potential cost of loss of control. The initiative effect is evaluated by studying the development process and the resulting incentive schemes for each of the departments. Similarly, the potential cost of loss of control is evaluated by assessing the congruence between focus of the new incentive schemes and the principal's objectives. We observe a high impact of the effort incentive in the form of innovative and ambitious selection of projects by the agents, leading to nine very different solutions across departments. However, we also observe some incongruence between the principal's stated objectives and the revealed private interests of the agents. Although this is a baseline study involving high uncertainty about the future, the findings point at some issues with the delegation approach that could lead to inefficient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Method for digital measurement of phase-frequency characteristics for a fixed-length ultrasonic spectrometer

    NASA Astrophysics Data System (ADS)

    Astashev, M. E.; Belosludtsev, K. N.; Kharakoz, D. P.

    2014-05-01

    One of the most accurate methods for measuring the compressibility of liquids is resonance measurement of sound velocity in a fixed-length interferometer. This method combines high sensitivity, accuracy, and a small sample volume of the test liquid. The measuring principle is to study the resonance properties of a composite resonator that contains a sample of the test liquid. Earlier, a phase-locked loop (PLL) scheme was used for this purpose. In this paper, we propose an alternative measurement scheme based on digital analysis of harmonic signals, describe its implementation using commercially available data acquisition modules, and give examples of test measurements with accuracy evaluations of the results.
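    Digital measurement of a harmonic signal's phase reduces to I/Q demodulation: correlate the record with a cosine and a sine at the known frequency and take the arctangent. A sketch assuming an integer number of cycles in the record (the paper's actual processing chain is not given in the abstract):

```python
import numpy as np

def measure_phase(signal, f, fs):
    """Phase of the harmonic component at frequency f via I/Q demodulation.

    Assumes the record spans an integer number of cycles of f, so the
    cross terms average out exactly."""
    t = np.arange(len(signal)) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))    # in-phase
    q = -2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))   # quadrature
    return np.arctan2(q, i)
```

    Sweeping f and tracking the phase-versus-frequency characteristic near a resonance is what lets the resonance frequency, and hence the sound velocity, be extracted.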

  3. Network coding multiuser scheme for indoor visible light communications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankun; Dang, Anhong

    2017-12-01

    Visible light communication (VLC) is a unique alternative for indoor data transfer and is developing beyond point-to-point links. For realizing high-capacity networks, however, VLC faces challenges including the constrained bandwidth of the optical access point and random occlusion. A network coding scheme for VLC (NC-VLC) is proposed, with increased throughput and system robustness. Based on the Lambertian illumination model, the theoretical decoding failure probability of the multiuser NC-VLC system is derived, and the impact of the system parameters on performance is analyzed. Experiments successfully demonstrate the proposed scheme in an indoor multiuser scenario. These results indicate that the NC-VLC system performs well under link loss and random occlusion.
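    The abstract does not give the coding construction; the simplest network-coding primitive such a scheme could build on is two-user XOR relaying, where the access point broadcasts a XOR b and each user recovers the other packet by cancelling its own. A minimal sketch of that primitive:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets (the network-coded packet).

    Broadcasting a ^ b lets a user holding a recover b (and vice versa),
    halving the number of downlink transmissions for the exchange."""
    assert len(a) == len(b), "packets must be padded to equal length"
    return bytes(x ^ y for x, y in zip(a, b))
```

    Decoding fails only when a user misses the coded broadcast or lacks its own side information, which is where an illumination-model-based decoding failure probability would enter.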

  4. Constructive polarization modulation for coherent population trapping clock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Peter, E-mail: enxue.yun@obspm.fr; Danet, Jean-Marie; Holleville, David

    2014-12-08

    We propose a constructive polarization modulation scheme for atomic clocks based on coherent population trapping (CPT). In this scheme, the polarization of a bichromatic laser beam is modulated between two opposite circular polarizations to avoid trapping the atomic populations in the extreme Zeeman sublevels. We show that if an appropriate phase modulation between the two optical components of the bichromatic laser is applied synchronously, the two CPT dark states which are produced successively by the alternate polarizations add constructively. Measured CPT resonance contrasts up to 20% in one-pulse CPT and 12% in two-pulse Ramsey-CPT experiments are reported, demonstrating the potentialmore » of this scheme for applications to high performance atomic clocks.« less

  5. Nagy-Soper Subtraction: a Review

    NASA Astrophysics Data System (ADS)

    Robens, Tania

    2013-07-01

    We review an alternative NLO subtraction scheme based on the splitting kernels of an improved parton shower that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e-→3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.

  6. Minimum Disclosure Counting for the Alternative Vote

    NASA Astrophysics Data System (ADS)

    Wen, Roland; Buckland, Richard

    Although there is a substantial body of work on preventing bribery and coercion of voters in cryptographic election schemes for plurality electoral systems, there are few attempts to construct such schemes for preferential electoral systems. The problem is that preferential systems are prone to bribery and coercion via subtle signature attacks during the counting. We introduce a minimum disclosure counting scheme for the alternative vote preferential system. Minimum disclosure provides protection from signature attacks by revealing only the winning candidate.

  7. Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho

    2016-11-01

    This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms, which provided an unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in the amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of the higher sampling efficiencies of non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging, which was regarded as the gold standard (difference within 1.8 mm on average).

  8. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
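    The AHP ranking step derives priority weights from a pairwise comparison matrix; the standard recipe is the principal eigenvector, normalized to sum to one. A power-iteration sketch (the paper's actual comparison matrices for flood depth and duration are not given in the abstract):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix.

    Uses power iteration to approximate the principal eigenvector,
    which is the AHP priority vector; entries A[i, j] encode how much
    more important criterion i is than criterion j."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0])
    for _ in range(100):
        w = A @ w
        w /= w.sum()
    return w
```

    For a perfectly consistent matrix (A[i, j] = w_i / w_j) the iteration recovers the underlying weights exactly; real comparison matrices are only approximately consistent, which is why AHP practice also checks a consistency ratio.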

  9. Pricing Models and Payment Schemes for Library Collections.

    ERIC Educational Resources Information Center

    Stern, David

    2002-01-01

    Discusses new pricing and payment options for libraries in light of online products. Topics include alternative cost models rather than traditional subscriptions; use-based pricing; changes in scholarly communication due to information technology; methods to determine appropriate charges for different organizations; consortial plans; funding; and…

  10. Indirect measurement of three-photon correlation in nonclassical light sources

    NASA Astrophysics Data System (ADS)

    Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon

    2016-06-01

    We observe the three-photon correlation in nonclassical light sources by using an indirect measurement scheme based on the dead-time effect of photon-counting detectors. We first develop a general theory which enables us to extract the three-photon correlation from the two-photon correlation of an arbitrary light source measured with detectors with finite dead times. We then confirm the validity of our measurement scheme in experiments done with a cavity-QED microlaser operating with a large intracavity mean photon number exhibiting both sub- and super-Poissonian photon statistics. The experimental results are in good agreement with the theoretical expectation. Our measurement scheme provides an alternative approach for N -photon correlation measurement employing (N -1 ) detectors and thus a reduced measurement time for a given signal-to-noise ratio, compared to the usual scheme requiring N detectors.

  11. Genetic progress in multistage dairy cattle breeding schemes using genetic markers.

    PubMed

    Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P

    2005-04-01

    The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. 
Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.

  12. Environmental assessment of alternative treatment schemes for energy and nutrient recovery from livestock manure.

    PubMed

    Pedizzi, C; Noya, I; Sarli, J; González-García, S; Lema, J M; Moreira, M T; Carballa, M

    2018-04-20

    The application of livestock manure on agricultural land is being restricted due to its significant content of phosphorus (P) and nitrogen (N), leading to eutrophication. At the same time, the growing demand for N and P mineral fertilizers is increasing their production costs and causing the depletion of natural phosphate rock deposits. In the present work, seven technologically feasible treatment schemes for energy (biogas) and nutrient recovery (e.g., struvite precipitation) and/or removal (e.g., partial nitritation/anammox) were evaluated from an environmental perspective. In general, while approaches based solely on energy recovery and use of digestate as fertilizer are commonly limited by community regulations, strategies pursuing the generation of high-quality struvite are not environmentally sound alternatives. In contrast, schemes that include further solid/liquid separation of the digestate improved the environmental profile, and their combination with an additional N-removal stage would lead to the most environmentally friendly framework. However, the preferred scenario was identified to be highly dependent on the particular conditions of each site, integrating environmental, social and economic criteria. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. An Alternative Classification Scheme for Teaching Performance Incentives Using a Factor Analytic Approach.

    ERIC Educational Resources Information Center

    Mertler, Craig A.

    This study attempted to (1) expand the dichotomous classification scheme typically used by educators and researchers to describe teaching incentives and (2) offer administrators and teachers an alternative framework within which to develop incentive systems. Elementary, middle, and high school teachers in Ohio rated 10 commonly instituted teaching…

  14. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.

  15. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  16. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks

    PubMed Central

    Li, Jiayin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-01-01

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing considers only multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs. PMID:29117152
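    A Metropolis-Hastings random walk over the sensing field can be sketched as follows, assuming a uniform target distribution over nodes (one plausible reading; the paper's delayed-acceptance variant adds a cheap pre-screening step that is not reproduced here):

```python
import random

def mh_step(current, neighbors, degree):
    """One Metropolis-Hastings step targeting the uniform node distribution.

    Propose a uniformly chosen neighbor v of the current node u, then
    accept with probability min(1, d(u) / d(v)); this degree correction
    removes the bias of a plain random walk toward high-degree nodes."""
    candidate = random.choice(neighbors(current))
    if random.random() < min(1.0, degree(current) / degree(candidate)):
        return candidate
    return current
```

    On a regular topology (all degrees equal) the correction is vacuous and every proposal is accepted; on irregular topologies the walk spends proportionally less time at hub nodes, which is what makes the sampled measurement locations closer to uniform.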

  17. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    PubMed

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, existing work on spatial-temporal data gathering with compressive sensing considers only multi-hop relaying or multiple-random-walk approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme that employs the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm, for a mobile collector gathering data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed by the proposed scheme for a spatial-temporal compressible signal satisfies the property of KCS models. Simulation results demonstrate that the proposed scheme not only significantly reduces communication cost but also improves recovery accuracy for mobile data gathering compared with existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
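
    The core of the collection scheme in the two records above is a random walk with a Metropolis-Hastings correction. Below is a minimal sketch of the plain Metropolis-Hastings walk on a grid-shaped sensor field (without the paper's delayed-acceptance refinement, and assuming a uniform target distribution and a 4-connected grid for illustration; all names are hypothetical):

```python
import random

def grid_neighbors(node, n):
    """4-connected neighbours of a node on an n x n sensor grid."""
    r, c = node
    nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(rr, cc) for rr, cc in nbrs if 0 <= rr < n and 0 <= cc < n]

def mh_random_walk(n, steps, start=(0, 0), seed=0):
    """Metropolis-Hastings random walk whose stationary distribution is
    uniform over the n x n grid: a move i -> j is accepted with
    probability min(1, deg(i)/deg(j)); rejected moves stay in place."""
    rng = random.Random(seed)
    path, cur = [start], start
    for _ in range(steps):
        nxt = rng.choice(grid_neighbors(cur, n))
        accept = min(1.0, len(grid_neighbors(cur, n)) / len(grid_neighbors(nxt, n)))
        if rng.random() < accept:
            cur = nxt
        path.append(cur)
    return path

walk = mh_random_walk(8, 200)
```

    Each visited node would contribute one compressive measurement; the visit pattern, combined with the temporal measurement matrix, yields the Kronecker-structured sensing matrix analyzed in the paper.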

  18. Optical frequency comb based multi-band microwave frequency conversion for satellite applications.

    PubMed

    Yang, Xinwu; Xu, Kun; Yin, Jie; Dai, Yitang; Yin, Feifei; Li, Jianqiang; Lu, Hua; Liu, Tao; Ji, Yuefeng

    2014-01-13

    Based on optical frequency combs (OFCs), we propose an efficient and flexible multi-band frequency conversion scheme for satellite repeater applications. The underlying principle is to mix two coherent OFCs, one of which carries the input signal. By optically channelizing the mixed OFCs, the converted signal in different bands can be obtained in different channels. Alternatively, the scheme can be configured to generate multi-band local oscillators (LOs) for wide distribution. Moreover, the scheme realizes simultaneous inter- and intra-band frequency conversion in a single structure and needs only three frequency-fixed microwave sources. We carry out a proof-of-concept experiment in which multiple LOs at 2 GHz, 10 GHz, 18 GHz, and 26 GHz are generated. A C-band signal at 6.1 GHz input to the proposed scheme is successfully converted to 4.1 GHz (C band), 3.9 GHz (C band) and 11.9 GHz (X band), among others. Compared with the back-to-back (B2B) case measured at 0 dBm input power, the proposed scheme shows a 9.3% error vector magnitude (EVM) degradation at each output channel. Furthermore, all channels satisfy the EVM limit over a very wide input power range.

  19. Alternative Level of Care: Canada's Hospital Beds, the Evidence and Options

    PubMed Central

    Sutherland, Jason M.; Crump, R. Trafford

    2013-01-01

    Patients designated as alternative level of care (ALC) are an ongoing concern for healthcare policy makers across Canada. These patients occupy valuable hospital beds and limit access to acute care services. The objective of this paper is to present policy alternatives to address underlying factors associated with ALC bed use. Three alternatives, and their respective limitations and structural challenges, are discussed. Potential solutions may require a mix of policy options proposed here. Inadequate policy jeopardizes new acute care activity-based funding schemes in British Columbia and Ontario. Failure to address this issue could exacerbate pressures on the existing bottlenecks in the community care system in these and other provinces. PMID:23968671

  20. A conjugate gradient method for solving the non-LTE line radiation transfer problem

    NASA Astrophysics Data System (ADS)

    Paletou, F.; Anterrieu, E.

    2009-12-01

    This study concerns the fast and accurate solution of the line radiation transfer problem, under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our study is based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane-parallel geometry and in the frame of the two-level atom model with monochromatic scattering, are discussed. The rates of convergence of the iterative schemes mentioned above are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.
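
    The classical stationary iterations that the record above compares against can be sketched generically (plain Jacobi and Gauss-Seidel on a small diagonally dominant system, not the radiative-transfer operator itself; the test matrix is illustrative):

```python
import numpy as np

def jacobi(A, b, iters):
    """Plain Jacobi iteration: x <- D^-1 (b - (A - D) x)."""
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - (A @ x - d * x)) / d
    return x

def gauss_seidel(A, b, iters):
    """Gauss-Seidel: like Jacobi, but uses updated entries immediately,
    which roughly squares the asymptotic convergence factor."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# 1-D Laplacian-like diagonally dominant test system
n = 30
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_ref = np.linalg.solve(A, b)
err_j = np.linalg.norm(jacobi(A, b, 50) - x_ref)
err_gs = np.linalg.norm(gauss_seidel(A, b, 50) - x_ref)
```

    On this system Gauss-Seidel reaches a given accuracy in roughly half the iterations Jacobi needs, which mirrors the ALI-Jacobi versus GS/SOR comparison in the paper; Krylov methods such as BiCG typically improve on both.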

  1. DIFFERENTIATING PASSENGER VEHICLES BY FUEL ECONOMY: STRATEGIC INCENTIVES AND THE COST-EFFECTIVENESS OF TRADABLE CAFE STANDARDS

    EPA Science Inventory

    The welfare and distributional effects of alternative fuel economy regulations will be compared, including an increase in existing CAFE standards, allowing for tradable credits, and implementing other design options in a trading scheme, such as sliding standards based on ve...

  2. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method achieves satisfactory restored color images under different blurring conditions.
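
    The alternating minimization loop described above can be sketched in one dimension. This is a hypothetical toy version, not the authors' algorithm: it alternates Tikhonov-regularized updates of the image and the blur in the Fourier domain, assuming circular convolution; `kernel_size`, `lam` and `mu` are illustrative parameters.

```python
import numpy as np

def blind_deconv_am(y, kernel_size, iters=50, lam=1e-3, mu=1e-3):
    """Toy alternating minimization for blind deconvolution with circular
    convolution: alternately update the image x and the blur h by
    Tikhonov-regularized least squares in the Fourier domain."""
    n = len(y)
    Y = np.fft.fft(y)
    h = np.zeros(n)
    h[0] = 1.0                                   # init blur as a delta
    x = y.copy()
    for _ in range(iters):
        H = np.fft.fft(h)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # image step
        x = np.real(np.fft.ifft(X))
        H = np.conj(X) * Y / (np.abs(X) ** 2 + mu)    # blur step
        h = np.real(np.fft.ifft(H))
        h[kernel_size:] = 0.0                    # enforce a small blur support
        h = np.clip(h, 0, None)                  # nonnegative blur
        s = h.sum()
        if s > 0:
            h /= s                               # blur mass sums to one
    return x, h

y = np.abs(np.random.RandomState(0).randn(64))   # synthetic nonnegative data
x_est, h_est = blind_deconv_am(y, kernel_size=5)
```

    A well-known caveat this sketch exhibits: without further constraints, blind deconvolution drifts toward the trivial identity-blur solution, which is exactly why the paper adds blur-domain regularization and parametric blur modeling.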

  3. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
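
    The idea of growing batches plus shrinking relaxation can be sketched with a stand-in for the Monte Carlo solver. This is a toy model, not the paper's scheme: the "power distribution" is a scalar fixed point, the noise shrinks as one over the square root of the number of histories, and the relaxation factor is taken as the current batch's share of all histories run so far (one common choice in stochastic-iteration coupling).

```python
import random

def noisy_power_estimate(p, histories, rng):
    """Stand-in for one Monte Carlo solve: the fixed-point map
    phi(p) = 0.5 + 0.5*p corrupted by noise ~ 1/sqrt(histories)."""
    return 0.5 + 0.5 * p + rng.gauss(0.0, 1.0) / histories ** 0.5

def relaxed_iteration(n_steps=30, n0=1000, seed=1):
    """Under-relaxed coupling: p <- p + alpha * (phi(p) - p), with the
    batch size growing linearly and alpha equal to the batch's share
    of all histories run so far (so alpha decays roughly as 2/step)."""
    rng = random.Random(seed)
    p, total = 0.0, 0
    for step in range(1, n_steps + 1):
        histories = n0 * step
        total += histories
        alpha = histories / total
        p += alpha * (noisy_power_estimate(p, histories, rng) - p)
    return p

p = relaxed_iteration()   # exact fixed point of phi is p = 1
```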

  4. Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications

    NASA Astrophysics Data System (ADS)

    Buhagiar, D.; Sant, T.

    2014-12-01

    This paper deals with the performance modelling and analysis of offshore wind turbine-driven hydraulic pumps. The concept consists of an open-loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump that supplies pressurised sea water. A mathematical model is derived to capture the steady-state behaviour of the entire system. A simplified model for the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is in particular the case with the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control.

  5. Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique

    NASA Astrophysics Data System (ADS)

    Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel

    2016-12-01

    Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that the trellis-based selected mapping (TSLM) scheme for FBMC-OQAM is not only superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper extends that work by analyzing TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to TSLM that requires much less hardware memory than the originally proposed scheme and also has lower latency. Additionally, the impact of the time duration of the partial PAPR on the performance of TSLM is studied, and its lower bound is identified by proposing a suitable time duration. A thorough and fair comparison of performance is also made with an existing trellis-based scheme from the literature. Simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.
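
    For reference, the classical symbol-by-symbol SLM scheme that TSLM improves on can be sketched as follows (plain OFDM rather than FBMC-OQAM, binary phase sequences, and keeping the identity sequence as a fallback candidate are simplifying assumptions):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(symbols, n_candidates=8, seed=0):
    """Classical selected mapping: multiply the frequency-domain symbols by
    random +/-1 phase sequences and keep the candidate whose time-domain
    signal has the lowest PAPR. The all-ones sequence is always included,
    so the result is never worse than the unmodified signal."""
    rng = np.random.RandomState(seed)
    candidates = [np.ones(len(symbols))]
    candidates += [rng.choice([1.0, -1.0], size=len(symbols))
                   for _ in range(n_candidates - 1)]
    best = min(candidates, key=lambda ph: papr_db(np.fft.ifft(symbols * ph)))
    return np.fft.ifft(symbols * best), papr_db(np.fft.ifft(symbols * best)), best

rng = np.random.RandomState(1)
qpsk = (rng.choice([1.0, -1.0], 64) + 1j * rng.choice([1.0, -1.0], 64)) / np.sqrt(2)
sig, papr, phase = slm(qpsk)
```

    TSLM replaces this independent per-symbol search with a trellis over overlapping FBMC symbols, which is where the memory and latency trade-offs analyzed in the paper arise.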

  6. Simplicial lattices in classical and quantum gravity: Mathematical structure and application

    NASA Astrophysics Data System (ADS)

    Lafave, Norman Joseph

    1989-03-01

    Geometrodynamics can be understood more clearly in the language of geometry than in the language of differential equations. This is the primary motivation for the development of calculational schemes based on Regge Calculus as an alternative to those based on Ricci Calculus. The mathematics of simplicial lattices was developed to the same level of sophistication as the mathematics of pseudo-Riemannian geometry for continuum manifolds. This involves defining the simplicial analogues of several concepts from differential topology and differential geometry: the concept of a point, tangent spaces, forms, tensors, parallel transport, covariant derivatives, connections, and curvature. These simplicial analogues are used to define the Einstein tensor and the extrinsic curvature on a simplicial geometry. This mathematical formalism was applied to the solution of several outstanding problems in the development of a Regge Calculus based computational scheme for general geometrodynamic problems. This scheme is based on a 3 + 1 splitting of spacetime within the Regge Calculus prescription known as Null-Strut Calculus (NSC). NSC describes the foliation of spacetime into spacelike hypersurfaces built of tetrahedra. These hypersurfaces are coupled by light rays (null struts) to past and future momentum-like structures, geometrically dual to the tetrahedral lattice of the hypersurface. Avenues of investigation for NSC in quantum gravity are described.

  7. Improved feed protein fractionation schemes for formulating rations with the cornell net carbohydrate and protein system.

    PubMed

    Lanzas, C; Broderick, G A; Fox, D G

    2008-12-01

    Adequate predictions of rumen-degradable protein (RDP) and rumen-undegradable protein (RUP) supplies are necessary to optimize performance while minimizing losses of excess nitrogen (N). The objectives of this study were to evaluate the original Cornell Net Carbohydrate Protein System (CNCPS) protein fractionation scheme and to develop and evaluate alternatives designed to improve its adequacy in predicting RDP and RUP. The CNCPS version 5 fractionates CP into 5 fractions based on solubility in protein precipitant agents, buffers, and detergent solutions: A represents the soluble nonprotein N, B1 is the soluble true protein, B2 represents protein with intermediate rates of degradation, B3 is the CP insoluble in neutral detergent solution but soluble in acid detergent solution, and C is the unavailable N. Model predictions were evaluated with studies that measured N flow data at the omasum. The N fractionation scheme in version 5 of the CNCPS explained 78% of the variation in RDP with a root mean square prediction error (RMSPE) of 275 g/d, and 51% of the RUP variation with an RMSPE of 248 g/d. Neutral detergent insoluble CP flows were overpredicted with a mean bias of 128 g/d (40% of the observed mean). The greatest improvements in the accuracy of RDP and RUP predictions were obtained with the following 2 alternative schemes. Alternative 1 used the inhibitory in vitro system to measure the fractional rate of degradation for the insoluble protein fraction, in which A = nonprotein N, B1 = true soluble protein, B2 = insoluble protein, C = unavailable protein (RDP: R² = 0.84 and RMSPE = 167 g/d; RUP: R² = 0.61 and RMSPE = 209 g/d), whereas alternative 2 redefined the A and B1 fractions as the non-amino-N and amino-N in the soluble fraction, respectively (RDP: R² = 0.79 with RMSPE = 195 g/d; RUP: R² = 0.54 with RMSPE = 225 g/d). We concluded that implementing alternative 1 or 2 will improve the accuracy of predicting RDP and RUP within the CNCPS framework.

  8. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  9. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.

    PubMed

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-21

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  10. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    PubMed Central

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-01-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262
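
    The collective dynamics such architectures exploit can be illustrated with the standard Kuramoto model of weakly coupled phase oscillators (a generic textbook model, not the specific nano-oscillator network of the papers above): above a critical coupling strength the population synchronizes, and this kind of collective state is what oscillator-based schemes use to encode pattern-matching decisions.

```python
import numpy as np

def kuramoto(frequencies, coupling, t_steps=2000, dt=0.01, seed=0):
    """Euler simulation of N coupled phase oscillators
    dtheta_i/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i),
    returning the time course of the order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.RandomState(seed)
    theta = rng.uniform(0, 2 * np.pi, len(frequencies))
    r = np.empty(t_steps)
    for t in range(t_steps):
        mean_field = np.mean(np.exp(1j * theta))   # (1/N) sum_j exp(i theta_j)
        r[t] = np.abs(mean_field)
        # mean-field form of the Kuramoto coupling term
        theta += dt * (frequencies + coupling * np.imag(np.exp(-1j * theta) * mean_field))
    return r

freqs = np.random.RandomState(1).normal(1.0, 0.05, 20)
r_weak = kuramoto(freqs, coupling=0.0)     # uncoupled: phases drift apart
r_strong = kuramoto(freqs, coupling=1.0)   # well above critical coupling
```

    With coupling well above the critical value the order parameter approaches one (near-full synchronization), while the uncoupled population stays incoherent; the paper's contribution is showing that useful computation survives when the coupling is deliberately kept weak and the devices are noisy and variable.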

  11. Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Innocenti, M.; Napolitano, M.

    2003-01-01

    Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of non-linear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than the classic back-propagation, and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure, Detection, Identification and Accommodation (SFDIA) using experimental flight data of a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as on-line learning non-linear approximators. The performance of two different neural architectures was compared. The first architecture is based on a Multi Layer Perceptron (MLP) NN trained with the Extended Back Propagation algorithm (EBPA). The second architecture is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithms. In addition, alternative methods for communications-link fault detection and accommodation are presented, relative to multiple unmanned aircraft applications.

  12. Recent developments in the structural design and optimization of ITER neutral beam manifold

    NASA Astrophysics Data System (ADS)

    Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI

    2018-02-01

    This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and the feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support schemes were performed. All relevant results of mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of components in the next phase.

  13. MPDATA: A positive definite solver for geophysical flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolarkiewicz, P.K.; Margolin, L.G.

    1997-12-31

    This paper is a review of MPDATA, a class of methods for the numerical simulation of advection based on the sign-preserving properties of upstream differencing. MPDATA was designed originally as an inexpensive alternative to flux-limited schemes for evaluating the transport of nonnegative thermodynamic variables (such as liquid water or water vapor) in atmospheric models. During the last decade, MPDATA has evolved from a simple advection scheme to a general approach for integrating the conservation laws of geophysical fluids on micro-to-planetary scales. The purpose of this paper is to summarize the basic concepts leading to a family of MPDATA schemes, review the existing MPDATA options, and demonstrate the efficacy of the approach using diverse examples of complex geophysical flows.
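
    A minimal 1-D sketch of the MPDATA idea: a donor-cell (upwind) pass followed by a second upwind pass driven by Smolarkiewicz's antidiffusive pseudo-velocity, which cancels the leading truncation error while keeping the scheme sign-preserving. Periodic boundaries and a single corrective pass are simplifying assumptions here.

```python
import numpy as np

def donor_cell(psi, c):
    """One upwind (donor-cell) step with periodic boundaries; c is the
    Courant number at interface i+1/2 (scalar or per-interface array)."""
    flux = np.maximum(c, 0) * psi + np.minimum(c, 0) * np.roll(psi, -1)
    return psi - (flux - np.roll(flux, 1))

def mpdata_step(psi, c, eps=1e-15):
    """One 1-D MPDATA step: an upwind pass, then a corrective upwind pass
    using the antidiffusive pseudo-velocity of Smolarkiewicz."""
    psi1 = donor_cell(psi, c)
    c_anti = (np.abs(c) - c ** 2) * (np.roll(psi1, -1) - psi1) \
             / (np.roll(psi1, -1) + psi1 + eps)
    return donor_cell(psi1, c_anti)

# advect a nonnegative bump at Courant number 0.5
x = np.linspace(0.0, 1.0, 64, endpoint=False)
psi = np.exp(-200.0 * (x - 0.3) ** 2)
total0 = psi.sum()
for _ in range(100):
    psi = mpdata_step(psi, 0.5)
```

    Because both passes are written in flux form, the total mass is conserved to machine precision, and the field stays nonnegative — the two properties that made MPDATA attractive for moisture variables.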

  14. Classification of extraterrestrial civilizations

    NASA Astrophysics Data System (ADS)

    Tang, Tong B.; Chang, Grace

    1991-06-01

    A scheme of classification of extraterrestrial intelligence (ETI) communities based on the scope of energy accessible to the civilization in question is proposed as an alternative to the Kardashev (1964) scheme, which comprises three types of civilization as determined by their levels of energy expenditure. The proposed scheme includes six classes: (1) a civilization that runs essentially on energy exerted by individual beings or by domesticated lower life forms, (2) harnessing of natural sources on the planetary surface with artificial constructions, like water wheels and wind sails, (3) energy from fossils and fissionable isotopes, mined beneath the planet surface, (4) exploitation of nuclear fusion on a large scale, whether on the planet, in space, or from primary solar energy, (5) extensive use of antimatter for energy storage, and (6) energy from spacetime, perhaps via the action of naked singularities.

  15. Quantum Iterative Deepening with an Application to the Halting Problem

    PubMed Central

    Tarrataca, Luís; Wichert, Andreas

    2013-01-01

    Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465
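
    Grover's amplitude amplification, the ingredient the scheme above builds on, can be simulated directly on the state vector (a generic illustration of the oracle-plus-diffusion iteration, not the production-system construction of the paper):

```python
import numpy as np

def grover(n_qubits, marked, iterations):
    """State-vector simulation of Grover's amplitude amplification:
    oracle phase flip on the marked index, then inversion about the mean."""
    dim = 2 ** n_qubits
    amp = np.full(dim, 1 / np.sqrt(dim))   # uniform superposition
    for _ in range(iterations):
        amp[marked] *= -1                  # oracle: flip the marked amplitude
        amp = 2 * amp.mean() - amp         # diffusion: inversion about the mean
    return amp

n = 6                                       # 64 basis states
k = int(round(np.pi / 4 * np.sqrt(2 ** n))) # ~optimal iteration count
amp = grover(n, marked=5, iterations=k)
prob = abs(amp[5]) ** 2                     # probability of measuring the marked state
```

    After roughly (pi/4)*sqrt(N) iterations the marked state's probability is close to one, which is the quadratic speedup the paper leverages for detecting halt states.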

  16. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    PubMed

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
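
    The morphological half-gradients at the heart of the scheme above are simple to state: the external half-gradient is dilation minus the image, the internal one is the image minus its erosion. A minimal numpy sketch with a flat 3x3 structuring element (the window size and edge handling are illustrative choices):

```python
import numpy as np

def dilate3(img):
    """Grey-level dilation by a flat 3x3 structuring element (edge-replicated)."""
    p = np.pad(img, 1, mode="edge")
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode3(img):
    """Grey-level erosion by a flat 3x3 structuring element (edge-replicated)."""
    p = np.pad(img, 1, mode="edge")
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def half_gradients(img):
    """External (dilation - image) and internal (image - erosion)
    morphological half-gradients."""
    return dilate3(img) - img, img - erode3(img)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0            # a bright square on a dark background
g_ext, g_int = half_gradients(img)
```

    The two half-gradients respond on opposite sides of an edge, and their sum is the full morphological gradient; the pansharpening scheme injects panchromatic detail extracted with such operators into the multispectral bands.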

  17. Nanopositioning for polarimetric characterization.

    PubMed

    Qureshi, Naser; Kolokoltsev, Oleg V; Ortega-Martínez, Roberto; Ordoñez-Romero, C L

    2008-12-01

    A positioning system with approximately nanometer resolution has been developed based on a new implementation of a motor-driven screw scheme. In contrast to conventional positioning systems based on piezoelectric elements, this system shows remarkably low levels of drift and vibration, and eliminates the need for position feedback during typical data acquisition processes. During positioning or scanning processes, non-repeatability and hysteresis problems inherent in mechanical positioning systems are greatly reduced using a software feedback scheme. As a result, we are able to demonstrate an average mechanical resolution of 1.45 nm and near diffraction-limited imaging using scanning optical microscopy. We propose this approach to nanopositioning as a readily accessible alternative enabling high spatial resolution scanning probe characterization (e.g., polarimetry) and provide practical details for its implementation.

  18. A New Artificial Neural Network Enhanced by the Shuffled Complex Evolution Optimization with Principal Component Analysis (SP-UCI) for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.

    2016-12-01

    Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics of natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization for training ANNs have already been found to be problematic in some cases: they carry the risk of premature convergence, can become stuck in local optima, and their search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based search schemes, we propose an effective and efficient global search method, termed the Shuffled Complex Evolutionary global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as with various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling at Trinity Lake in the western U.S. In this case study, multiple climate indices are used as predictors for the SP-UCI-enhanced ANN, and reservoir inflows and hydropower releases are predicted at sub-seasonal to seasonal scales. Results show that the SP-UCI-enhanced ANN achieves better statistics than the other EA-enhanced ANNs, which implies the usefulness and power of the proposed approach for reservoir operation and water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has good potential as an alternative to the classical BP scheme and gradient-based optimization methods.
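
    The idea of replacing backpropagation by a global search over the connection weights can be sketched with a much simpler evolutionary hill climber (a (1+lambda)-style strategy on a tiny 2-2-1 network, not SP-UCI itself; the XOR task and all parameters are illustrative):

```python
import numpy as np

def mlp(weights, x):
    """Tiny 2-2-1 MLP with tanh activations; `weights` is a flat 9-vector."""
    w1 = weights[:6].reshape(2, 3)       # 2 hidden units, 2 inputs + bias
    w2 = weights[6:]                     # 1 output, 2 hidden + bias
    h = np.tanh(w1 @ np.append(x, 1.0))
    return np.tanh(w2 @ np.append(h, 1.0))

def mse(weights, xs, ys):
    return np.mean([(mlp(weights, x) - y) ** 2 for x, y in zip(xs, ys)])

def evolve(xs, ys, pop=40, gens=200, seed=0):
    """Evolutionary hill climber over the weight vector: keep the best
    individual seen so far, sample Gaussian perturbations of it, and
    accept only improvements (so the error history is non-increasing)."""
    rng = np.random.RandomState(seed)
    best = rng.randn(9)
    best_err = mse(best, xs, ys)
    history = [best_err]
    for _ in range(gens):
        for _ in range(pop):
            cand = best + 0.3 * rng.randn(9)
            err = mse(cand, xs, ys)
            if err < best_err:
                best, best_err = cand, err
        history.append(best_err)
    return best, history

xor_x = [np.array(p, dtype=float) for p in ((0, 0), (0, 1), (1, 0), (1, 1))]
xor_y = [-1.0, 1.0, 1.0, -1.0]
w, hist = evolve(xor_x, xor_y)
```

    SP-UCI replaces this naive perturbation search with shuffled complexes and a PCA step that restores lost search dimensions, which is what gives it the convergence advantages reported above.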

  19. A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory

    NASA Astrophysics Data System (ADS)

    Stolk, Christiaan C.

    2016-06-01

    We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse-level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
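
    The phase error that drives such designs can be made concrete for the standard second-order central-difference Helmholtz operator in 1-D, whose discrete dispersion relation has a closed-form solution (this is the textbook baseline scheme, not the paper's optimized stencil):

```python
import numpy as np

def numerical_wavenumber(k, h):
    """Wavenumber actually propagated by the standard second-order central
    difference for u'' + k^2 u = 0 on a grid of spacing h: solve the
    discrete dispersion relation 2*(cos(kt*h) - 1)/h**2 + k**2 = 0 for kt."""
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

k = 1.0
phase_errors = {}
for ppw in (5, 10, 20):                  # grid points per wavelength
    h = 2 * np.pi / (k * ppw)
    phase_errors[ppw] = numerical_wavenumber(k, h) / k - 1.0
```

    At five points per wavelength the baseline scheme already misplaces the phase by about 8% per wavelength of propagation, which compounds ruinously over hundreds of wavelengths; this is exactly the error budget that dispersion-minimized coefficients shrink.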

  20. Control scheme for power modulation of a free piston Stirling engine

    DOEpatents

    Dhar, Manmohan

    1989-01-01

    The present invention relates to a control scheme for power modulation of a free-piston Stirling engine-linear alternator power generator system. The present invention includes connecting an autotransformer in series with a tuning capacitance between the linear alternator and a utility grid to maintain a constant displacer-to-piston stroke ratio and their relative phase angle over a wide range of operating conditions.

  1. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurs in some electrodes (for example, when the signal is lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
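The iteratively reweighted norm idea can be illustrated generically. The sketch below (hypothetical dimensions, not the EP/BSP transfer matrix) minimizes an L1 data term with a small L2 penalty by iteratively reweighted least squares; a gross error in one "electrode" barely perturbs the L1 solution, in contrast to ordinary least squares:

```python
import numpy as np

def irls_l1(A, b, lam=1e-3, iters=50, eps=1e-8):
    """Minimise ||A x - b||_1 + lam ||x||_2^2 by iteratively reweighted
    least squares: each residual |r_i| is approximated by r_i^2 / w_i
    with w_i = max(|r_i|, eps), then a weighted LS problem is solved."""
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)
        AtW = A.T * w                     # = A^T diag(w)
        x = np.linalg.solve(AtW @ A + lam * np.eye(n), AtW @ b)
    return x

# One "electrode" with a gross measurement error (signal loss):
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true
b[3] += 50.0                              # corrupted measurement
x_l1 = irls_l1(A, b)
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_l1 - x_true), np.linalg.norm(x_l2 - x_true))
```

With 39 clean equations the L1 fit essentially ignores the corrupted one, which is the robustness property the abstract attributes to the L1 data term.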

  2. Management initiatives in a community-based health insurance scheme.

    PubMed

    Sinha, Tara; Ranson, M Kent; Chatterjee, Mirai; Mills, Anne

    2007-01-01

    Community-based health insurance (CBHI) schemes have developed in response to inadequacies of alternate systems for protecting the poor against health care expenditures. Some of these schemes have arisen within community-based organizations (CBOs), which have strong links with poor communities, and are therefore well situated to offer CBHI. However, the managerial capacities of many such CBOs are limited. This paper describes management initiatives undertaken in a CBHI scheme in India, in the course of an action-research project. The existing structures and systems at the CBHI had several strengths, but fell short on some counts, which became apparent in the course of planning for two interventions under the research project. Management initiatives were introduced that addressed four features of the CBHI, viz. human resources, organizational structure, implementation systems, and data management. Trained personnel were hired and given clear roles and responsibilities. Lines of reporting and accountability were spelt out, and supportive supervision was provided to team members. The data resources of the organization were strengthened for greater utilization of this information. While the changes that were introduced took some time to be accepted by team members, the commitment of the CBHI's leadership to these initiatives was critical to their success. Copyright (c) 2007 John Wiley & Sons, Ltd.

  3. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risk and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.
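The REILP formulation itself cannot be reconstructed from the abstract, but the basic interval-programming step, bounding the optimum between pessimistic and optimistic crisp LPs solved at the interval endpoints, can be sketched with a hypothetical two-industry toy problem (all numbers invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy planning problem: maximise return r1*x1 + r2*x2 subject to a
# pollution-load cap, where returns and the cap are interval-valued.
r_lo, r_hi = np.array([3.0, 5.0]), np.array([4.0, 6.0])  # return per unit
load = np.array([[2.0, 3.0]])                            # load per unit
cap_lo, cap_hi = 60.0, 80.0                              # interval cap

def best_return(r, cap):
    # linprog minimises, so negate the objective to maximise.
    res = linprog(-r, A_ub=load, b_ub=[cap], bounds=[(0, None)] * 2)
    return -res.fun

pessimistic = best_return(r_lo, cap_lo)   # low returns, tight cap
optimistic = best_return(r_hi, cap_hi)    # high returns, loose cap
print(pessimistic, optimistic)
```

The system return then lies in the interval [pessimistic, optimistic]; the risk-explicit layer of REILP trades off how far toward the optimistic bound a plan may safely reach.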

  4. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risk and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management. PMID:24191144

  5. Alternative Path Communication in Wide-Scale Cluster-Tree Wireless Sensor Networks Using Inactive Periods

    PubMed Central

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may also be required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to set up alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths follow shorter inter-cluster routes, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling the reduction of the number of message drops due to queue overflows in the cluster-tree network. PMID:28481245

  6. Alternative Path Communication in Wide-Scale Cluster-Tree Wireless Sensor Networks Using Inactive Periods.

    PubMed

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-05-06

The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable technology to deploy wide-scale Wireless Sensor Networks (WSNs). These networks are usually designed to support convergecast traffic, where all communication paths go through the PAN (Personal Area Network) coordinator. Nevertheless, peer-to-peer communication relationships may also be required for different types of WSN applications. That is the typical case of sensor and actuator networks, where local control loops must be closed using a reduced number of communication hops. The use of communication schemes optimised just for the support of convergecast traffic may result in higher network congestion and in a potentially higher number of communication hops. Within this context, this paper proposes an Alternative-Route Definition (ARounD) communication scheme for WSNs. The underlying idea of ARounD is to set up alternative communication paths between specific source and destination nodes, avoiding congested cluster-tree paths. These alternative paths follow shorter inter-cluster routes, using a set of intermediate nodes to relay messages during their inactive periods in the cluster-tree network. Simulation results show that the ARounD communication scheme can significantly decrease the end-to-end communication delay, when compared to the use of standard cluster-tree communication schemes. Moreover, the ARounD communication scheme is able to reduce the network congestion around the PAN coordinator, enabling the reduction of the number of message drops due to queue overflows in the cluster-tree network.
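A minimal sketch of the underlying routing idea (not the ARounD protocol itself, whose scheduling details are not in the abstract): plain Dijkstra over a toy cluster-tree augmented with one relay link that stands in for an inactive-period detour. Blocking the relay forces traffic back through the PAN coordinator:

```python
import heapq

def shortest_path(adj, src, dst, blocked=frozenset()):
    """Dijkstra over an adjacency dict {node: {nbr: cost}}, skipping any
    node in `blocked` (e.g. an unavailable relay)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                      # reconstruct the path
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if v in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Leaves A and B normally communicate through the PAN coordinator;
# relay R offers a shorter inter-cluster detour.
adj = {
    "A": {"C1": 1, "R": 1}, "B": {"C2": 1, "R": 1},
    "C1": {"A": 1, "PAN": 1}, "C2": {"B": 1, "PAN": 1},
    "PAN": {"C1": 1, "C2": 1}, "R": {"A": 1, "B": 1},
}
print(shortest_path(adj, "A", "B"))                 # 2 hops via relay R
print(shortest_path(adj, "A", "B", blocked={"R"}))  # 4 hops via the PAN
```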

  7. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
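For reference, the shift-invert Lanczos building block itself is available off the shelf (here the exact-solve variant via SciPy/ARPACK, not the paper's inexact scheme): shift-invert targets the eigenvalues of a symmetric problem nearest a chosen shift by running Lanczos on (K - σI)⁻¹.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D discrete Laplacian: eigenvalues are known, 2 - 2 cos(j*pi/(n+1)).
n = 200
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

# Shift-invert with sigma = 0 targets the smallest eigenvalues; each
# Lanczos step applies (K - sigma*I)^{-1}, factorised once by ARPACK.
vals = eigsh(K, k=3, sigma=0, return_eigenvectors=False)

exact = 2 - 2 * np.cos(np.arange(1, 4) * np.pi / (n + 1))
print(np.sort(vals), exact)
```

The paper's contribution is to replace the exact inner solve by an inexact preconditioned CG solve plus an IJOCC refinement, which this sketch does not attempt.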

  8. Fault-tolerant quantum computation with nondeterministic entangling gates

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2018-03-01

    Performing entangling gates between physical qubits is necessary for building a large-scale universal quantum computer, but in some physical implementations—for example, those that are based on linear optics or networks of ion traps—entangling gates can only be implemented probabilistically. In this work, we study the fault-tolerant performance of a topological cluster state scheme with local nondeterministic entanglement generation, where failed entangling gates (which correspond to bonds on the lattice representation of the cluster state) lead to a defective three-dimensional lattice with missing bonds. We present two approaches for dealing with missing bonds; the first is a nonadaptive scheme that requires no additional quantum processing, and the second is an adaptive scheme in which qubits can be measured in an alternative basis to effectively remove them from the lattice, hence eliminating their damaging effect and leading to better threshold performance. We find that a fault-tolerance threshold can still be observed with a bond-loss rate of 6.5% for the nonadaptive scheme, and a bond-loss rate as high as 14.5% for the adaptive scheme.

  9. A Novel Protective Framework for Defeating HTTP-Based Denial of Service and Distributed Denial of Service Attacks.

    PubMed

    Saleh, Mohammed A; Abdul Manaf, Azizah

    2015-01-01

The growth of web technology has brought convenience to our lives, since it has become the most important communication channel. However, this merit is now threatened by complicated network-based attacks, such as denial of service (DoS) and distributed denial of service (DDoS) attacks. Despite many researchers' efforts, no optimal solution that addresses all sorts of HTTP DoS/DDoS attacks is on offer. Therefore, this research aims to fill this gap by designing an alternative solution called the flexible, collaborative, multilayer, DDoS prevention framework (FCMDPF). The innovative design of the FCMDPF framework handles all aspects of HTTP-based DoS/DDoS attacks through three successive schemes (layers). Firstly, an outer blocking (OB) scheme blocks an attacking IP source if it is listed in the blacklist table. Secondly, the service traceback oriented architecture (STBOA) scheme validates whether the incoming request is launched by a human or by an automated tool, and then traces back the true attacking IP source. Thirdly, the flexible advanced entropy-based (FAEB) scheme eliminates high-rate DDoS (HR-DDoS) and flash crowd (FC) attacks. Compared with previous research, the framework's design provides efficient protection for web applications against all sorts of DoS/DDoS attacks.

  10. A Novel Protective Framework for Defeating HTTP-Based Denial of Service and Distributed Denial of Service Attacks

    PubMed Central

    Saleh, Mohammed A.; Abdul Manaf, Azizah

    2015-01-01

The growth of web technology has brought convenience to our lives, since it has become the most important communication channel. However, this merit is now threatened by complicated network-based attacks, such as denial of service (DoS) and distributed denial of service (DDoS) attacks. Despite many researchers' efforts, no optimal solution that addresses all sorts of HTTP DoS/DDoS attacks is on offer. Therefore, this research aims to fill this gap by designing an alternative solution called the flexible, collaborative, multilayer, DDoS prevention framework (FCMDPF). The innovative design of the FCMDPF framework handles all aspects of HTTP-based DoS/DDoS attacks through three successive schemes (layers). Firstly, an outer blocking (OB) scheme blocks an attacking IP source if it is listed in the blacklist table. Secondly, the service traceback oriented architecture (STBOA) scheme validates whether the incoming request is launched by a human or by an automated tool, and then traces back the true attacking IP source. Thirdly, the flexible advanced entropy-based (FAEB) scheme eliminates high-rate DDoS (HR-DDoS) and flash crowd (FC) attacks. Compared with previous research, the framework's design provides efficient protection for web applications against all sorts of DoS/DDoS attacks. PMID:26065015
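The FAEB layer's details are not given in the abstract, but the general entropy signal that such detectors use is easy to sketch: the Shannon entropy of the source-IP distribution in a request window collapses when a flood comes from few sources (a generic illustration with invented addresses, not the paper's scheme):

```python
import math
from collections import Counter

def source_entropy(requests):
    """Shannon entropy (bits) of the source-IP distribution in a window
    of requests; a flood from few sources drives it down."""
    counts = Counter(requests)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = [f"10.0.0.{i}" for i in range(16)] * 4      # 16 evenly active hosts
attack = normal + ["6.6.6.6"] * 500                  # one host floods
print(source_entropy(normal), source_entropy(attack))
```

A flash crowd, by contrast, keeps the entropy high while the rate spikes, which is why rate and entropy are typically combined.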

  11. Picture-Tube Imperialism? The Impact of U. S. Television on Latin America.

    ERIC Educational Resources Information Center

    Wells, Alan

    Current theories of national economic development are reviewed and found unsatisfactory, and an alternative scheme is presented, based on the concepts of consumerism and producerism, as well as the realization that development is not a unitary phenomenon, but proceeds at different rates in different sectors of the economy and different parts of a…

  12. Moderation of Peer Assessment in Group Projects

    ERIC Educational Resources Information Center

    Bushell, Graeme

    2006-01-01

    It is shown here that a grade distribution scheme commonly used to moderate peer assessments where self assessment is excluded is based on a false premise and will give an erroneous ranking in the situation where the best performer in a student group ranks the second best performer much higher than the other group members. An alternative to…

  13. What Is the Reference? An Examination of Alternatives to the Reference Sources Used in IES TM-30-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, Michael P.

A study was undertaken to document the role of the reference illuminant in the IES TM-30-15 method for evaluating color rendition. TM-30-15 relies on a relative reference scheme; that is, the reference illuminant and test source always have the same correlated color temperature (CCT). The reference illuminant is a Planckian radiator, model of daylight, or combination of those two, depending on the exact CCT of the test source. Three alternative reference schemes were considered: 1) either using all Planckian radiators or all daylight models; 2) using only one of ten possible illuminants (Planckian, daylight, or equal energy), regardless of the CCT of the test source; 3) using an off-Planckian reference illuminant (i.e., a source with a negative Duv). No reference scheme is inherently superior to another, with differences in metric values largely a result of small differences in gamut shape of the reference alternatives. While using any of the alternative schemes is more reasonable in the TM-30-15 evaluation framework than it was with the CIE CRI framework, the differences still ultimately manifest only as changes in interpretation of the results. References are employed in color rendering measures to provide a familiar point of comparison, not to establish an ideal source.

  14. A novel high-speed CMOS circuit based on a gang of capacitors

    NASA Astrophysics Data System (ADS)

    Sharroush, Sherif M.

    2017-08-01

There is no doubt that complementary metal-oxide semiconductor (CMOS) circuits with wide fan-in suffer from relatively sluggish operation. In this paper, a circuit that contains a gang of capacitors sharing their charge with each other is proposed as an alternative to long N-channel MOS and P-channel MOS stacks. The proposed scheme is investigated quantitatively and verified by simulation using the 45-nm CMOS technology with VDD = 1 V. The time delay, area and power consumption of the proposed scheme are investigated and compared with the conventional static CMOS logic circuit. It is verified that the proposed scheme achieves 52% saving in the average propagation delay for eight inputs and that it has a smaller area compared to conventional CMOS logic when the number of inputs exceeds three, and smaller power consumption when the number of inputs exceeds two. The impacts of process variations, component mismatches and technology scaling on the proposed scheme are also investigated.
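The abstract does not detail the circuit, but the arithmetic behind charge sharing is simple: connecting pre-charged capacitors in parallel conserves total charge, so the settled voltage is the capacitance-weighted mean of the initial voltages. A sketch with invented component values (not the paper's sizing):

```python
# Charge sharing: capacitors C1..Cn charged to V1..Vn and then connected
# in parallel settle at V = Q_total / C_total (charge conservation).
def shared_voltage(caps, volts):
    q = sum(c * v for c, v in zip(caps, volts))   # total charge
    return q / sum(caps)                          # settled voltage

# Eight equal capacitors, three discharged by "low" inputs:
caps = [1e-15] * 8                 # 1 fF each (illustrative values)
volts = [1.0] * 5 + [0.0] * 3
print(shared_voltage(caps, volts))
```

With equal capacitors the settled voltage is simply the fraction of "high" inputs times VDD (5/8 V here), which is what a downstream threshold stage can evaluate.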

  15. An Implicit LU/AF FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

There has been some recent work to develop two- and three-dimensional alternating direction implicit (ADI) FDTD schemes. These ADI schemes are based upon the original ADI concept developed by Peaceman and Rachford, and Douglas and Gunn, which is a popular solution method in Computational Fluid Dynamics (CFD). These ADI schemes work well, but they require solution of a tridiagonal system of equations. A new approach proposed in this paper applies an LU/AF approximate factorization technique from CFD to Maxwell's equations in flux conservative form for one space dimension. The result is a scheme that will retain its unconditional stability in three space dimensions, but does not require the solution of tridiagonal systems. The theory for this new algorithm is outlined in a one-dimensional context for clarity. An extension to two- and three-dimensional cases is discussed. Results of Fourier analysis are discussed for both stability and dispersion/damping properties of the algorithm. Results are presented for a one-dimensional model problem, and the explicit FDTD algorithm is chosen as a convenient reference for comparison.
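For context, the tridiagonal solves that conventional ADI sweeps require (and that the proposed LU/AF scheme avoids) are typically done with the Thomas algorithm. A minimal sketch on a 1-D model problem, -u'' = 1 with homogeneous boundary conditions:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (Thomas algorithm);
    a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = 1 on (0,1), u(0) = u(1) = 0, four interior points, h = 0.2:
n, h = 4, 0.2
x = thomas([-1.0] * n, [2.0] * n, [-1.0] * n, [h * h] * n)
print(x)   # matches the exact u = x(1-x)/2 at the grid points
```

Each ADI half-step solves one such system per grid line, at O(n) cost per line.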

  16. Asynchronous Gossip for Averaging and Spectral Ranking

    NASA Astrophysics Data System (ADS)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
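As a baseline for the variants discussed, idealized pairwise gossip can be sketched as follows: two random nodes repeatedly replace their values by the pair's mean, so the sum (hence the average) is invariant and all values contract toward it. The failure mode the paper highlights arises in truly asynchronous variants, which this simple simulation does not model:

```python
import random

def gossip_average(values, steps=2000, seed=1):
    """Randomized pairwise gossip: each step averages one random pair.
    The sum is preserved, so all nodes converge to the global mean."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        m = (x[i] + x[j]) / 2
        x[i] = x[j] = m
    return x

vals = [0.0, 10.0, 20.0, 30.0, 40.0]
out = gossip_average(vals)
print(out)   # every entry close to the true average, 20.0
```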

  17. Advanced control design for hybrid turboelectric vehicle

    NASA Technical Reports Server (NTRS)

    Abban, Joseph; Norvell, Johnesta; Momoh, James A.

    1995-01-01

The new environmental standards are a challenge and an opportunity for the industry and government agencies that manufacture and operate urban mass transit vehicles. A research investigation to provide a control scheme for efficient power management of the vehicle is in progress. Different design requirements using functional analysis and trade studies of alternate power sources and controls have been performed. The design issues include portability, weight and emission/fuel efficiency of the induction motor, permanent magnet and battery. A strategic design scheme to manage power requirements using advanced control systems is presented. It exploits fuzzy logic technology and a rule-based decision support scheme. The benefits of our study will enhance the economic and technical feasibility of providing a low-emission, fuel-efficient urban mass transit bus. The design team includes undergraduate researchers in our department. Sample results using the NASA HTEV simulation tool are presented.

  18. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
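The flux-limiting idea can be sketched on 1-D linear advection: a limited second-order upwind update keeps the total variation from growing and creates no new extrema, unlike the unlimited second-order scheme. The sketch uses minmod, a stricter limiter than the universal limiter described in the paper:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at sign changes, else the smaller slope."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, cfl):
    """One step of limited second-order upwind for u_t + u_x = 0 on a
    periodic grid: limited slopes, upwind face values, flux difference."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
    uface = u + 0.5 * (1 - cfl) * du                    # value at i+1/2
    flux = uface                                        # f(u) = u
    return u - cfl * (flux - np.roll(flux, 1))

u = np.where(np.arange(40) < 20, 1.0, 0.0)              # step profile
tv0 = np.abs(u - np.roll(u, 1)).sum()
for _ in range(30):
    u = advect_step(u, cfl=0.5)
tv1 = np.abs(u - np.roll(u, 1)).sum()
print(tv0, tv1, float(u.min()), float(u.max()))
```

The total variation of the step profile (2.0) never increases and the solution stays in [0, 1]; replacing minmod with a less restrictive limiter sharpens the profile without losing these bounds, which is the direction the ULTRA-SHARP strategy pushes.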

  19. A back-fitting algorithm to improve real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan

    2018-07-01

Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
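The AR error-correction postprocessor of scheme II is simple to sketch on synthetic data (hypothetical function and variable names; the hydrological model is replaced by an invented biased simulation with an AR(1) error):

```python
import numpy as np

def ar1_correct(sim, obs, phi=None):
    """One-step-ahead AR(1) error correction: the corrected forecast at
    step t is the simulation plus phi times the previous step's known
    error; phi is fit by least squares through the origin if not given."""
    err = obs - sim
    if phi is None:
        phi = np.dot(err[:-1], err[1:]) / np.dot(err[:-1], err[:-1])
    return sim[1:] + phi * err[:-1], phi

# Synthetic "flood" record whose model misses a persistent AR(1) error:
rng = np.random.default_rng(3)
obs = 50 + 30 * np.sin(np.linspace(0, 3, 200))
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.8 * e[t - 1] + rng.normal(0, 1.0)
sim = obs - e                       # biased model output
corr, phi = ar1_correct(sim, obs)
rmse_raw = np.sqrt(np.mean((obs[1:] - sim[1:]) ** 2))
rmse_cor = np.sqrt(np.mean((obs[1:] - corr) ** 2))
print(round(phi, 2), rmse_raw, rmse_cor)
```

The joint and back-fitting schemes of the paper go further by re-estimating the hydrological parameters together with phi, which this postprocessor sketch does not do.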

  20. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and the Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary-aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast-enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low-rank and compressed sensing schemes. PMID:23542951
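The full three-way decoupling is beyond the abstract, but the majorize-minimize mechanism can be shown on the sparse-coding subproblem alone (hypothetical dimensions, dictionary held fixed): each iteration minimizes a quadratic majorizer of the smooth term, which amounts to a soft-thresholded gradient step, so the objective never increases:

```python
import numpy as np

def ista(D, x, lam, iters=200):
    """Majorise-minimise (ISTA) for min_c 0.5||x - D c||^2 + lam||c||_1
    with a fixed dictionary D: a gradient step with step 1/L followed by
    soft thresholding, where L = ||D||_2^2 bounds the curvature."""
    L = np.linalg.norm(D, 2) ** 2
    c = np.zeros(D.shape[1])
    obj = []
    for _ in range(iters):
        g = D.T @ (D @ c - x)
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        obj.append(0.5 * np.sum((x - D @ c) ** 2) + lam * np.abs(c).sum())
    return c, obj

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
c_true = np.zeros(60)
c_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
x = D @ c_true                       # 3-sparse synthetic signal
c, obj = ista(D, x, lam=0.1)
print(obj[0], obj[-1])
```

The BCS algorithm additionally updates the dictionary (under the Frobenius constraint) and enforces data consistency, cycling through all three subproblems rather than only this one.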

  1. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  2. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

ten Eikelder, M.F.P.; Daude, F.

In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.
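The alternate solution of two submodels is an instance of operator (Lie) splitting. A toy linear sketch (a 2×2 ODE system standing in for the acoustic and convective operators, not the Kapila model): advancing with each sub-operator in turn converges to the coupled solution at first order in the step size when the operators do not commute:

```python
import numpy as np

# Lie (alternating) splitting for du/dt = (A + B) u: per step, apply
# exp(A dt) then exp(B dt).  A and B are invented non-commuting stand-ins.
A = np.array([[0.0, -1.0], [1.0, 0.0]])    # "acoustic" rotation
B = np.array([[-0.3, 0.0], [0.0, -0.1]])   # "convective" damping

def expm2(M):
    """Matrix exponential via eigendecomposition (diagonalisable 2x2)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def lie_split(u0, T, steps):
    dt = T / steps
    EA, EB = expm2(A * dt), expm2(B * dt)
    u = u0.copy()
    for _ in range(steps):
        u = EB @ (EA @ u)                  # alternate the two submodels
    return u

u0 = np.array([1.0, 0.0])
exact = expm2((A + B) * 1.0) @ u0
err = [np.linalg.norm(lie_split(u0, 1.0, n) - exact) for n in (10, 20, 40)]
print(err)   # error shrinks roughly in proportion to the step size
```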

  3. Unstructured grids for sonic-boom analysis

    NASA Technical Reports Server (NTRS)

    Fouladi, Kamran

    1993-01-01

    A fast and efficient unstructured grid scheme is evaluated for sonic-boom applications. The scheme is used to predict the near-field pressure signatures of a body of revolution at several body lengths below the configuration, and those results are compared with experimental data. The introduction of the 'sonic-boom grid topology' to this scheme makes it well suited for sonic-boom applications, thus providing an alternative to conventional multiblock structured grid schemes.

  4. Error-correcting pairs for a public-key cryptosystem

    NASA Astrophysics Data System (ADS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-06-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size. Some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.

  5. 76 FR 35181 - Wireless Backhaul; Further Inquiry Into Fixed Service Sharing of the 6875-7125 and 12700-13200...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-16

    ... continuation of electronic newsgathering operations, and the appropriate channelization scheme, coordination... also sought comment on alternative channelization schemes. Several commenters, including FWCC and...

  6. Deformation of angle profiles in forward kinematics for nullifying end-point offset while preserving movement properties.

    PubMed

    Zhang, Xudong

    2002-10-01

    This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.
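    The two deformation rules can be sketched as follows. The exact formulas are not given in the abstract, so the forms below are hypothetical readings of the scheme names: the AP scheme distributes the final-frame correction in proportion to the accumulated angular excursion, while the TP scheme distributes it linearly in time. Both leave the first frame untouched and apply the full correction at the final frame.

```python
import numpy as np

def deform_tp(theta, delta_final):
    """Time-proportional (TP) deformation, hypothetical form: spread the
    final-frame correction linearly in time."""
    s = np.linspace(0.0, 1.0, len(theta))
    return theta + delta_final * s

def deform_ap(theta, delta_final):
    """Amplitude-proportional (AP) deformation, hypothetical form: spread
    the final-frame correction in proportion to the fraction of the total
    angular excursion completed at each frame."""
    excursion = theta - theta[0]
    total = excursion[-1]
    if np.isclose(total, 0.0):               # flat profile: fall back to TP
        return deform_tp(theta, delta_final)
    return theta + delta_final * excursion / total

theta = np.linspace(0.0, 1.2, 50) ** 2       # a monotone joint-angle profile (rad)
corrected_ap = deform_ap(theta, delta_final=0.05)
corrected_tp = deform_tp(theta, delta_final=0.05)
```

    Under either rule the first frame is unchanged and the last frame is shifted by exactly the prescribed correction, which matches the paper's requirement of nullifying the end-point offset at the final frame.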

  7. Barriers and facilitators to implementation, uptake and sustainability of community-based health insurance schemes in low- and middle-income countries: a systematic review.

    PubMed

    Fadlallah, Racha; El-Jardali, Fadi; Hemadi, Nour; Morsi, Rami Z; Abou Samra, Clara Abou; Ahmad, Ali; Arif, Khurram; Hishi, Lama; Honein-AbouHaidar, Gladys; Akl, Elie A

    2018-01-29

    Community-based health insurance (CBHI) has evolved as an alternative health financing mechanism to out-of-pocket payments in low- and middle-income countries (LMICs), particularly in areas where government or employer-based health insurance is minimal. This systematic review aimed to assess the barriers and facilitators to implementation, uptake and sustainability of CBHI schemes in LMICs. We searched six electronic databases and grey literature. We included both quantitative and qualitative studies written in English and published after 1992. Two reviewers worked in duplicate and independently to complete study selection, data abstraction, and assessment of methodological features. We synthesized the findings based on thematic analysis and categorized them according to the ecological model into individual, interpersonal, community and systems levels. Of 15,510 citations, 51 met the eligibility criteria. Individual factors included awareness and understanding of the concept of CBHI, trust in the scheme and scheme managers, perceived service quality, and demographic characteristics, which influenced enrollment and sustainability. Interpersonal factors such as household dynamics, other family members enrolled in the scheme, and social solidarity influenced enrollment and renewal of membership. Community-level factors such as culture and community involvement in scheme development influenced enrollment and sustainability of the scheme. Systems-level factors encompassed governance, financial and delivery arrangements. Government involvement, accountability of scheme management, and strong policymaker-implementer relations facilitated implementation and sustainability of the scheme. Packages that covered outpatient and inpatient care and those tailored to community needs contributed to increased enrollment.
The amount and timing of premium collection were reported to negatively influence enrollment, while factors reported as threats to sustainability included facility bankruptcy, operating on small budgets, rising healthcare costs, a small risk pool, irregular contributions, and overutilization of services. At the delivery level, accessibility of facilities, the facility environment, and health personnel influenced enrollment, service utilization and dropout rates. There is a multitude of interrelated factors at the individual, interpersonal, community and systems levels that drive the implementation, uptake and sustainability of CBHI schemes. We discuss the implications of the findings at the policy and research level. The review protocol is registered in PROSPERO, the international prospective register of systematic reviews (ID = CRD42015019812).

  8. Force analysis of magnetic bearings with power-saving controls

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1992-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.

  9. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative means of solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique to identify the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
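    The abstract names a least-squares estimator running in real time; a standard way to realize that is recursive least squares (RLS). The sketch below is a generic RLS identifier fitted to a toy force model f = m*a + c*v, not the paper's actual regressor for space-robot mass properties.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic recursive least-squares (RLS) estimator for y = phi^T theta.

    A stand-in for the paper's real-time mass-property identifier; the
    actual regressor used there is not reproduced here.
    """
    def __init__(self, n_params, p0=1e3, forgetting=1.0):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = np.eye(n_params) * p0         # estimate covariance
        self.lam = forgetting                  # forgetting factor (1 = none)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Identify mass m and damping c from noisy force samples f = m*a + c*v
rng = np.random.default_rng(0)
true = np.array([2.0, 0.5])                    # [m, c]
rls = RecursiveLeastSquares(2)
for _ in range(500):
    phi = rng.normal(size=2)                   # [acceleration, velocity]
    y = phi @ true + 1e-3 * rng.normal()       # measured force
    est = rls.update(phi, y)
```

    Each update costs O(n^2) with no batch matrix inversion, which is what makes the estimator usable inside a control loop.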

  10. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative means of solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique to identify the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  11. [The establishment, development and application of classification approach of freshwater phytoplankton based on the functional group: a review].

    PubMed

    Yang, Wen; Zhu, Jin-Yong; Lu, Kai-Hong; Wan, Li; Mao, Xiao-Hua

    2014-06-01

    Appropriate schemes for the classification of freshwater phytoplankton are prerequisites and important tools for revealing phytoplanktonic succession and studying freshwater ecosystems. An alternative approach, the functional group of freshwater phytoplankton, has been proposed and developed owing to the deficiencies of Linnaean and molecular identification in ecological applications. The functional group of phytoplankton is a classification scheme based on autoecology. In this study, the theoretical basis and classification criteria of the functional group (FG), morpho-functional group (MFG) and morphology-based functional group (MBFG) are summarized, as well as their merits and demerits. FG is considered the optimal classification approach for aquatic ecology research and aquatic environment evaluation. The application status of FG is introduced, and the evaluation standards and problems of two FG-based water quality assessment approaches, the Q and QR index methods, are briefly discussed.

  12. Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Ebrahimi, Touradj

    2014-09-01

    The increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. In addition, H.264/AVC, currently one of the most popular and widespread encoders, has been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC-based encoding algorithm in comparison to the other alternatives, with VP9 and AVC showing similar performance.

  13. Alternate fusion fuels workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1981-06-01

    The workshop was organized to focus on a specific confinement scheme: the tokamak. The workshop was divided into two parts: systems and physics. The topics discussed in the systems session were narrowly focused on systems and engineering considerations in the tokamak geometry. The workshop participants reviewed the status of system studies, trade-offs between d-t and d-d based reactors and engineering problems associated with the design of a high-temperature, high-field reactor utilizing advanced fuels. In the physics session issues were discussed dealing with high-beta stability, synchrotron losses and transport in alternate fuel systems. The agenda for the workshop is attached.

  14. Tunable Soft X-Ray Oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurtele, Jonathan; Gandhi, Punut; Gu, X-W

    A concept for a tunable soft x-ray free electron laser (FEL) photon source is presented and studied numerically. The concept is based on echo-enabled harmonic generation (EEHG), wherein two modulator-chicane sections impose high harmonic structure with much greater efficacy than conventional high harmonic FELs that use only one modulator-chicane section. The idea proposed here is to replace the external laser power sources in the EEHG modulators with FEL oscillators, and to combine the bunching of the beam with the production of radiation. Tunability is accomplished by adjusting the magnetic chicanes while the two oscillators remain at a fixed frequency. This scheme eliminates the need to develop coherent sources with the requisite power, pulse length, and stability requirements by exploiting the MHz bunch repetition rates of FEL continuous wave (CW) sources driven by superconducting (SC) linacs. We present time-dependent GINGER simulation results for an EEHG scheme with an oscillator modulator at 43 nm employing 50% reflective dielectric mirrors and a second modulator employing an external, 215-nm drive laser. Peak output of order 300 MW is obtained at 2.7 nm, corresponding to the 80th harmonic of 215 nm. An alternative single-cavity echo-oscillator scheme based on a 13.4 nm oscillator is investigated with time-independent simulations that yield a 180-MW peak power at a final wavelength of 1.12 nm. Three alternate configurations that use separate bunches to produce the radiation for EEHG microbunching are also presented. Our results show that oscillator-based soft x-ray FELs driven by CW SC linacs are extremely attractive because of their potential to produce tunable radiation at high average power together with excellent longitudinal coherence and narrow spectral bandwidth.

  15. Introduction of the Floquet-Magnus expansion in solid-state nuclear magnetic resonance spectroscopy.

    PubMed

    Mananga, Eugène S; Charpentier, Thibault

    2011-07-28

    In this article, we present an alternative expansion scheme called the Floquet-Magnus expansion (FME), used to solve a time-dependent linear differential equation, which is a central problem in quantum physics in general and solid-state nuclear magnetic resonance (NMR) in particular. The commonly used methods to treat theoretical problems in solid-state NMR are the average Hamiltonian theory (AHT) and the Floquet theory (FT), which have been successful for designing sophisticated pulse sequences and understanding different experiments. To the best of our knowledge, this is the first report of the FME scheme in the context of solid-state NMR, and we compare this approach with other series expansions. We present a modified FME scheme highlighting the importance of the (time-periodic) boundary conditions. This modified scheme greatly simplifies the calculation of higher order terms and is shown to be equivalent to the Floquet theory (single or multimode time-dependence), but allows one to derive the effective Hamiltonian in the Hilbert space. Basic applications of the FME scheme are described and compared to previous treatments based on AHT, FT, and static perturbation theory. We also discuss the convergence aspects of the three schemes (AHT, FT, and FME) and present the relevant references. © 2011 American Institute of Physics.
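    For context, the lowest-order terms of the average Hamiltonian (Magnus) expansion that the article compares against can be written as follows. These are the standard AHT expressions over one period t_c of the interaction-frame Hamiltonian, not the FME-specific terms derived in the paper:

```latex
\bar{H}^{(0)} = \frac{1}{t_c}\int_0^{t_c} \tilde{H}(t)\,dt,
\qquad
\bar{H}^{(1)} = \frac{-i}{2\,t_c}\int_0^{t_c} dt_2 \int_0^{t_2} dt_1\,
\bigl[\tilde{H}(t_2),\,\tilde{H}(t_1)\bigr].
```

    Higher-order terms involve nested commutators with increasingly many time orderings, which is the calculational burden the modified FME scheme is said to simplify.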

  16. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Xia, Yidong; Luo, Hong

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  17. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE PAGES

    Liu, Xiaodong; Xia, Yidong; Luo, Hong; ...

    2016-10-05

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  18. Secure annotation for medical images based on reversible watermarking in the Integer Fibonacci-Haar transform domain

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2011-03-01

    The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice is inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), of a secure cryptographic scheme, and of a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image, ii) the encryption of annotated images without causing loss of the embedded information, and iii) thanks to the complete reversibility of the process, recovery of the original image after mark removal. Experimental results show the effectiveness of the proposed scheme.

  19. High-bandwidth generation of duobinary and alternate-mark-inversion modulation formats using SOA-based signal processing.

    PubMed

    Dailey, James M; Power, Mark J; Webb, Roderick P; Manning, Robert J

    2011-12-19

    We report on the novel all-optical generation of duobinary (DB) and alternate-mark-inversion (AMI) modulation formats at 42.6 Gb/s from an input on-off keyed signal. The modulation converter consists of two semiconductor optical amplifier (SOA)-based Mach-Zehnder interferometer gates. A detailed SOA model numerically confirms the operational principles and experimental data shows successful AMI and DB conversion at 42.6 Gb/s. We also predict that the operational bandwidth can be extended beyond 40 Gb/s by utilizing a new pattern-effect suppression scheme, and demonstrate dramatic reductions in patterning up to 160 Gb/s. We show an increasing trade-off between pattern-effect reduction and mean output power with increasing bitrate.

  20. Self-Organizing Map Neural Network-Based Nearest Neighbor Position Estimation Scheme for Continuous Crystal PET Detectors

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei

    2014-10-01

    Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on field-programmable gate arrays (FPGAs). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aiming not only at providing high performance, but also at being realistic for FPGA implementation. Benefiting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized and compared with the smoothed k-NN method. The spatial resolutions in full-width-at-half-maximum (FWHM) of both methods, averaged over the center axis of the detector, were obtained as 1.87 ±0.17 mm and 1.92 ±0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but the amount of computation is only about one-tenth of that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on an FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.
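    The two-stage idea (SOM compression of reference events, then nearest-neighbor lookup over the few prototypes) can be sketched generically. The toy below trains a 1-D SOM on synthetic 2-D feature vectors standing in for detector light distributions; the real scheme's calibration data, map size and FPGA details are not reproduced here.

```python
import numpy as np

def train_som(data, n_units, epochs=50, lr0=0.5, seed=0):
    """Train a 1-D self-organizing map: each unit becomes a prototype
    summarizing a neighborhood of the reference events, so the later
    nearest-neighbor search runs over n_units points instead of all data."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    sigma0 = n_units / 2.0
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu                  # distance on the map
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))    # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w

def nearest_prototype(w, x):
    """Classify an unknown event by its nearest SOM prototype."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

rng = np.random.default_rng(1)
# Two clusters of synthetic 'reference events' in a 2-D feature space
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(1.0, 0.1, (200, 2))])
protos = train_som(data, n_units=8)
unit = nearest_prototype(protos, np.array([0.05, -0.02]))
```

    The lookup cost drops from O(N) reference events to O(n_units), which is the one-order-of-magnitude computation saving the abstract reports.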

  1. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, B.; Polizzi, E.

    2013-05-01

    The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
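    A traditional SCF mixing loop of the kind the paper benchmarks against can be sketched on a toy nonlinear eigenproblem. The model below uses H(ψ) = H0 + g·diag(|ψ|²), a Gross-Pitaevskii-like stand-in rather than the Kohn-Sham Hamiltonian, with simple linear density mixing.

```python
import numpy as np

def scf_lowest_state(h0, g=1.0, mix=0.5, tol=1e-10, max_iter=200):
    """Toy SCF loop for the nonlinear eigenproblem H(psi) psi = E psi with
    H(psi) = H0 + g*diag(|psi|^2) -- a Gross-Pitaevskii-like stand-in for
    the Kohn-Sham Hamiltonian. Simple linear density mixing stabilizes the
    fixed-point iteration."""
    n = h0.shape[0]
    rho = np.full(n, 1.0 / n)                    # initial density guess
    energy, psi = None, None
    for _ in range(max_iter):
        h = h0 + g * np.diag(rho)                # rebuild H from the density
        vals, vecs = np.linalg.eigh(h)
        energy, psi = vals[0], vecs[:, 0]        # occupied (lowest) state
        rho_new = psi ** 2
        if np.max(np.abs(rho_new - rho)) < tol:  # self-consistency reached
            break
        rho = mix * rho_new + (1.0 - mix) * rho  # linear mixing
    return energy, psi

# 1-D discrete Laplacian plus a harmonic well as H0
n = 40
x = np.linspace(-1.0, 1.0, n)
h0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.diag(5.0 * x ** 2)
energy, psi = scf_lowest_state(h0, g=2.0)
```

    Each iteration requires a full linear eigensolve of the frozen-density Hamiltonian; the paper's point is that a nonlinear FEAST-style solver can avoid both this repeated cost and the sensitivity of such loops to the initial guess.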

  2. Demonstration of micro-projection enabled short-range communication system for 5G.

    PubMed

    Chou, Hsi-Hsir; Tsai, Cheng-Yu

    2016-06-13

    A liquid crystal on silicon (LCoS) based polarization modulated image (PMI) system architecture using red-, green- and blue-based light-emitting diodes (LEDs), which offers simultaneous micro-projection and high-speed data transmission at nearly a gigabit per second, serving as an alternative short-range communication (SRC) approach for personal communication device (PCD) application in 5G, is proposed and experimentally demonstrated. In order to make the proposed system architecture transparent to possible future wireless data modulation formats, baseband modulation schemes such as multilevel pulse amplitude modulation (M-PAM), M-ary phase shift keying (M-PSK) and M-ary quadrature amplitude modulation (M-QAM), which can be further employed by more advanced multicarrier modulation schemes (such as DMT, OFDM and CAP), were used to investigate the highest possible data transmission rate of the proposed system architecture. The results demonstrated that aggregate data transmission rates of 892 Mb/s and 900 Mb/s at a BER of 10^(-3) can be achieved using the 16-QAM baseband modulation scheme when data transmission was performed with and without simultaneous micro-projection, respectively.
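    As a concrete instance of the baseband schemes named above, the sketch below implements Gray-coded 16-QAM mapping and hard-decision demapping. It illustrates only the M-QAM constellation logic, not the LCoS/LED optical link or any multicarrier framing.

```python
import numpy as np

GRAY2 = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}   # Gray-coded 4-level axis
LEVELS = {v: k for k, v in GRAY2.items()}

def qam16_modulate(bits):
    """Map groups of 4 bits to Gray-coded 16-QAM symbols: the first two
    bits select the I level, the last two the Q level."""
    groups = np.asarray(bits).reshape(-1, 4)
    i = np.array([GRAY2[(b[0] << 1) | b[1]] for b in groups], dtype=float)
    q = np.array([GRAY2[(b[2] << 1) | b[3]] for b in groups], dtype=float)
    return i + 1j * q

def qam16_demodulate(symbols):
    """Hard-decision demapping: slice each axis to the nearest level."""
    axis = np.array([-3, -1, 1, 3])
    out = []
    for s in np.atleast_1d(symbols):
        gi = LEVELS[int(axis[np.argmin(np.abs(axis - s.real))])]
        gq = LEVELS[int(axis[np.argmin(np.abs(axis - s.imag))])]
        out.extend([(gi >> 1) & 1, gi & 1, (gq >> 1) & 1, gq & 1])
    return out

bits = [1, 0, 1, 1, 0, 0, 0, 1]
syms = qam16_modulate(bits)        # two complex constellation symbols
recovered = qam16_demodulate(syms)
```

    Gray coding makes adjacent constellation points differ in a single bit, so the most likely slicing errors under noise cost only one bit each.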

  3. Application Research of Horn Array Multi-Beam Antenna in Reference Source System for Satellite Interference Location

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Lin, Hui; Zhang, Qi

    2018-01-01

    The reference source system is a key factor in ensuring successful location of a satellite interference source. The traditional system used a mechanically rotating antenna, which led to slow rotation and a high failure rate, seriously restricting the system's positioning timeliness and becoming its obvious weakness. In this paper, a multi-beam antenna scheme based on a horn array is proposed as a reference source for satellite interference location, serving as an alternative to the traditional reference source antenna. The new scheme designs a small circularly polarized horn antenna as an element and proposes a multi-beamforming algorithm based on a planar array. Moreover, simulation analyses of the horn antenna pattern, the multi-beamforming algorithm and the simulated satellite-link cross-ambiguity calculation have been carried out. Finally, the cross-ambiguity calculation of the traditional reference source system has also been tested. The comparison between the computer simulation results and the actual test results shows that the scheme is scientific and feasible, and clearly superior to the traditional reference source system.
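    A minimal narrowband model of planar-array beamforming conveys the multi-beam idea: each beam is just a set of per-element phase weights, so beams can be switched electronically rather than by mechanical rotation. The sketch below is generic (isotropic elements, half-wavelength spacing), not the paper's horn-array design or its beamforming algorithm.

```python
import numpy as np

def steering_vector(nx, ny, d, wavelength, theta, phi):
    """Phase factors across an nx-by-ny planar array (element spacing d)
    for a plane wave from elevation theta / azimuth phi. A generic
    narrowband model with isotropic elements."""
    k = 2 * np.pi / wavelength
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    ux = np.sin(theta) * np.cos(phi)
    uy = np.sin(theta) * np.sin(phi)
    return np.exp(1j * k * d * (ix * ux + iy * uy)).ravel()

def array_gain(weights, nx, ny, d, wavelength, theta, phi):
    """Normalized power response |w^H a|^2 / N^2 of the weighted array
    in direction (theta, phi)."""
    a = steering_vector(nx, ny, d, wavelength, theta, phi)
    n = nx * ny
    return np.abs(np.vdot(weights, a)) ** 2 / n ** 2

# Steer an 8x8 half-wavelength-spaced array toward (20 deg, 45 deg)
lam, d, nx, ny = 1.0, 0.5, 8, 8
t0, p0 = np.deg2rad(20.0), np.deg2rad(45.0)
w = steering_vector(nx, ny, d, lam, t0, p0)   # conjugate-match weights
```

    Forming several simultaneous beams is then just evaluating several weight sets against the same element signals, which is the electronic replacement for mechanical scanning.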

  4. PBT assessment under REACH: Screening for low aquatic bioaccumulation with QSAR classifications based on physicochemical properties to replace BCF in vivo testing on fish.

    PubMed

    Nendza, Monika; Kühne, Ralph; Lombardo, Anna; Strempel, Sebastian; Schüürmann, Gerrit

    2018-03-01

    Aquatic bioconcentration factors (BCFs) are critical in PBT (persistent, bioaccumulative, toxic) and risk assessment of chemicals. High costs and the use of more than 100 fish per standard BCF study (OECD 305) call for alternative methods to replace as much in vivo testing as possible. The BCF waiving scheme is a screening tool combining QSAR classifications based on physicochemical properties related to the distribution (hydrophobicity, ionisation), persistence (biodegradability, hydrolysis), solubility and volatility (Henry's law constant) of substances in water bodies and aquatic biota to predict substances with low aquatic bioaccumulation (nonB, BCF<2000). The BCF waiving scheme was developed with a dataset of reliable BCFs for 998 compounds and externally validated with another 181 substances. It performs with 100% sensitivity (no false negatives), >50% efficacy (waiving potential), and complies with the OECD principles for valid QSARs. The chemical applicability domain of the BCF waiving scheme is given by the structures of the training set, with some compound classes explicitly excluded, such as organometallics, poly- and perfluorinated compounds, aromatic triphenylphosphates and surfactants. The prediction confidence of the BCF waiving scheme is based on applicability domain compliance, consensus modelling, and structural similarity with known nonB and B/vB substances. Compounds classified as nonB by the BCF waiving scheme are candidates for waiving of BCF in vivo testing on fish due to low concern with regard to the B criterion. The BCF waiving scheme supports the 3Rs with a possible reduction of >50% of BCF in vivo testing on fish. If the target chemical is outside the applicability domain of the BCF waiving scheme or is not classified as nonB, further assessments with in silico, in vitro or in vivo methods are necessary to either confirm or reject bioaccumulative behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.
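    The all-indicators-must-agree logic that yields no false negatives can be sketched as a toy decision rule. Every cut-off below is a hypothetical placeholder, not a validated threshold of the published scheme, and the real tool combines many more QSAR classifications plus applicability-domain checks.

```python
def nonb_candidate(log_kow, readily_biodegradable, log_henry):
    """Toy screen in the spirit of the BCF waiving scheme: flag a substance
    as a candidate for waiving (nonB, BCF < 2000) only if EVERY indicator
    votes for low aquatic bioaccumulation; any dissent defers to further
    testing, mirroring the scheme's no-false-negatives design.

    All cut-offs below are hypothetical placeholders, not the validated
    thresholds of the published scheme.
    """
    votes = [
        log_kow < 3.0,            # low hydrophobicity (hypothetical cut-off)
        readily_biodegradable,    # unlikely to persist in the water body
        log_henry > -2.0,         # volatilizes out of the water column
    ]
    return all(votes)

screen_ok = nonb_candidate(log_kow=1.2, readily_biodegradable=True, log_henry=-1.0)
screen_refer = nonb_candidate(log_kow=6.5, readily_biodegradable=False, log_henry=-8.0)
```

    Requiring unanimity trades waiving potential (efficacy) for sensitivity, which is the design choice the abstract's "100% sensitivity, >50% efficacy" figures reflect.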

  5. Alternating direction implicit methods for parabolic equations with a mixed derivative

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1980-01-01

    Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.
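    The factorized half-steps that make ADI methods efficient can be illustrated with the classical Peaceman-Rachford scheme for the 2D heat equation u_t = u_xx + u_yy with homogeneous Dirichlet boundaries (a generic textbook sketch, without the mixed-derivative term and two-step construction of the paper). Each half step is implicit in one direction only, so it reduces to independent tridiagonal solves handled by the Thomas algorithm in O(n) per line.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm, O(n)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford step on an n x n interior grid, r = dt/(2 h^2).

    Boundary values are zero. Half step 1 is implicit in x and explicit
    in y; half step 2 swaps the roles of the two directions.
    """
    n = len(u)
    sub, diag, sup = [-r] * n, [1.0 + 2.0 * r] * n, [-r] * n
    ustar = [[0.0] * n for _ in range(n)]
    for j in range(n):  # implicit in x: one tridiagonal solve per y-line
        rhs = []
        for i in range(n):
            ujm = u[i][j - 1] if j > 0 else 0.0
            ujp = u[i][j + 1] if j < n - 1 else 0.0
            rhs.append(u[i][j] + r * (ujm - 2.0 * u[i][j] + ujp))
        col = thomas(sub, diag, sup, rhs)
        for i in range(n):
            ustar[i][j] = col[i]
    unew = [[0.0] * n for _ in range(n)]
    for i in range(n):  # implicit in y: one tridiagonal solve per x-line
        rhs = []
        for j in range(n):
            uim = ustar[i - 1][j] if i > 0 else 0.0
            uip = ustar[i + 1][j] if i < n - 1 else 0.0
            rhs.append(ustar[i][j] + r * (uim - 2.0 * ustar[i][j] + uip))
        row = thomas(sub, diag, sup, rhs)
        for j in range(n):
            unew[i][j] = row[j]
    return unew
```

    Because the scheme is unconditionally stable, r can be taken far above the explicit stability limit and a centered heat pulse still decays smoothly toward the zero boundary data.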

  6. Alternating direction implicit methods for parabolic equations with a mixed derivative

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1979-01-01

    Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.

  7. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    PubMed

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission. It offers a viable alternative for transmitting more power without major modification to the existing structure of three-phase double-circuit transmission systems. In spite of these advantages, the low acceptance of six-phase systems is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximate coefficients of the voltage and current signals, obtained using the discrete wavelet transform, are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults, with variation in location, fault resistance, and fault inception angle. The variation in power system parameters, viz. the short-circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real-time implementation.
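    The feature-extraction stage described above can be sketched compactly. The abstract specifies the standard deviation of DWT approximation coefficients as the ANN input; the Haar wavelet and the three-level decomposition below are placeholder choices, since the mother wavelet and depth used in the paper are not given here.

```python
import math

def haar_approx(signal):
    """One DWT level: Haar low-pass (approximation) coefficients."""
    return [(signal[2 * k] + signal[2 * k + 1]) / math.sqrt(2.0)
            for k in range(len(signal) // 2)]

def std_of_approx(signal, levels=3):
    """Standard deviation of the level-`levels` approximation coefficients.

    One such scalar per measured signal would be fed to the modular ANN
    for fault detection, classification and location.
    """
    coeffs = list(signal)
    for _ in range(levels):
        coeffs = haar_approx(coeffs)
    mean = sum(coeffs) / len(coeffs)
    return math.sqrt(sum((c - mean) ** 2 for c in coeffs) / len(coeffs))
```

    For a six-phase line, the detector would assemble one feature vector from the six phase currents and voltages sampled at a single line end.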

  8. A paralleled readout system for an electrical DNA-hybridization assay based on a microstructured electrode array

    NASA Astrophysics Data System (ADS)

    Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang

    2003-02-01

    DNA analytics is a growing field based on the increasing knowledge about the genome with special implications for the understanding of molecular bases for diseases. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence based. Fluorescence is highly sensitive and broadly established, but shows limitations regarding quantification (due to signal and/or dye instability) and the need for sophisticated (and therefore high-cost) equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it will bind on complementary capture DNA immobilized in a microelectrode gap. A subsequent metal enhancement step leads to a deposition of conductive material on the nanoparticles, and finally an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, which could be well-suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows a parallel readout of an entire array of microstructured measurement sites. The readout is combined with data-processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system is described, and first measurements are presented.

  9. Evaluating Prototype Tasks and Alternative Rating Schemes for a New ESL Writing Test through G-Theory

    ERIC Educational Resources Information Center

    Lee, Yong-Won; Kantor, Robert

    2007-01-01

    Possible integrated and independent tasks were pilot tested for the writing section of a new generation of the TOEFL[R] (Test of English as a Foreign Language[TM]). This study examines the impact of various rating designs and of the number of tasks and raters on the reliability of writing scores based on integrated and independent tasks from the…

  10. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  11. "Trees Live on Soil and Sunshine!"--Coexistence of Scientific and Alternative Conception of Tree Assimilation.

    PubMed

    Thorn, Christine Johanna; Bissinger, Kerstin; Thorn, Simon; Bogner, Franz Xaver

    2016-01-01

    Successful learning is the integration of new knowledge into existing schemes, leading to an integrated and correct scientific conception. By contrast, the co-existence of scientific and alternative conceptions may indicate a fragmented knowledge profile. Every learner is unique and thus carries an individual set of preconceptions before classroom engagement due to prior experiences. Hence, instructors and teachers have to consider the heterogeneous knowledge profiles of their class when teaching. However, determinants of fragmented knowledge profiles are not yet well understood, which may hamper the development of adapted teaching schemes. We used a questionnaire-based approach to assess conceptual knowledge of tree assimilation and wood synthesis, surveying 885 students at four educational levels: 6th graders, 10th graders, natural science freshmen and other academic studies freshmen. We analysed the influence of learners' characteristics such as educational level, age and sex on the coexistence of scientific and alternative conceptions. Within all subsamples, well-known alternative conceptions regarding tree assimilation and wood synthesis coexisted with correct scientific ones. For example, students describe trees as living on "soil and sunshine", representing scientific knowledge of photosynthesis mingled with an alternative conception of trees eating like animals. Fragmented knowledge profiles occurred in all subsamples, but our models showed that improved education and age foster knowledge integration. Sex had almost no influence on the existing scientific conceptions and the evolution of knowledge integration. Consequently, complex biological issues such as tree assimilation and wood synthesis need specific support, e.g. through repeated learning units in classrooms and seminar rooms, in order to help young students in particular to handle and overcome common alternative conceptions and to integrate scientific conceptions appropriately into their knowledge profiles.

  12. “Trees Live on Soil and Sunshine!”- Coexistence of Scientific and Alternative Conception of Tree Assimilation

    PubMed Central

    Thorn, Simon; Bogner, Franz Xaver

    2016-01-01

    Successful learning is the integration of new knowledge into existing schemes, leading to an integrated and correct scientific conception. By contrast, the co-existence of scientific and alternative conceptions may indicate a fragmented knowledge profile. Every learner is unique and thus carries an individual set of preconceptions before classroom engagement due to prior experiences. Hence, instructors and teachers have to consider the heterogeneous knowledge profiles of their class when teaching. However, determinants of fragmented knowledge profiles are not yet well understood, which may hamper the development of adapted teaching schemes. We used a questionnaire-based approach to assess conceptual knowledge of tree assimilation and wood synthesis, surveying 885 students at four educational levels: 6th graders, 10th graders, natural science freshmen and other academic studies freshmen. We analysed the influence of learners’ characteristics such as educational level, age and sex on the coexistence of scientific and alternative conceptions. Within all subsamples, well-known alternative conceptions regarding tree assimilation and wood synthesis coexisted with correct scientific ones. For example, students describe trees as living on “soil and sunshine”, representing scientific knowledge of photosynthesis mingled with an alternative conception of trees eating like animals. Fragmented knowledge profiles occurred in all subsamples, but our models showed that improved education and age foster knowledge integration. Sex had almost no influence on the existing scientific conceptions and the evolution of knowledge integration. Consequently, complex biological issues such as tree assimilation and wood synthesis need specific support, e.g. through repeated learning units in classrooms and seminar rooms, in order to help young students in particular to handle and overcome common alternative conceptions and to integrate scientific conceptions appropriately into their knowledge profiles.
PMID:26807974

  13. Adaptive angular-velocity Vold-Kalman filter order tracking - Theoretical basis, numerical implementation and parameter investigation

    NASA Astrophysics Data System (ADS)

    Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do

    2016-12-01

    The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, the adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be solved recursively with a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. The adaptive VKF_OT scheme is compared with two other schemes by processing synthetic signals of designated order components. Processing parameters that influence tracking behavior, such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed off-line. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
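    The recursive one-step predict/update that underlies any Kalman-based order tracker can be shown with a scalar random-walk filter. This is a generic sketch, not the VKF_OT structural and data equations; its process-noise variance q plays a role loosely analogous to the weighting factor whose influence the paper investigates.

```python
def kalman_track(measurements, q=1e-4, r=1e-2):
    """Track a slowly varying level through noise in one pass, O(N).

    q: process-noise variance (how fast the state may wander);
    r: measurement-noise variance. A larger q tracks changes faster
    but smooths the measurements less.
    """
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q            # one-step prediction (random-walk model)
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # correct with the innovation z - x
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

    In an order-tracking setting the state would instead hold the complex envelope of each tracked order, driven by the measured rotating speed.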

  14. Pharmaceutical cost-containment policies and sustainability: recent Irish experience.

    PubMed

    Kenneally, Martin; Walshe, Valerie

    2012-01-01

    Our objective is to review and assess the main pharmaceutical cost-containment policies used in Ireland in recent years, and to highlight how a policy that improved fiscal sustainability but worsened economic sustainability could have improved both had an option-based approach been implemented. The main public pharmaceutical cost-containment policy measures (reducing the ex-factory price of drugs, pharmacy dispensing fees and community drug scheme coverage, and increasing patient copayments) are outlined along with the resulting savings. We quantify the cost implications of a new policy that restricts the entitlement to free prescription drugs of persons older than 70 years and propose an alternative option-based policy that reduces the total cost to both the state and the patient. This set of policy measures reduced public spending on community drugs by an estimated €380m in 2011. The policy restricting free prescription drugs for persons older than 70 years, though effective in reducing public cost, increased the total cost of the drugs supplied. The policy-induced cost increase stems from a fees anomaly between the two main community drug schemes, which is circumvented by our alternative option-based policy. Our findings highlight the need for policymakers, even when focused on reducing costs, to design cost-containment policies that are both fiscally and economically sustainable. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  15. Cooling schemes for two-component fermions in layered optical lattices

    NASA Astrophysics Data System (ADS)

    Goto, Shimpei; Danshita, Ippei

    2017-12-01

    Recently, a cooling scheme for ultracold atoms in a bilayer optical lattice has been proposed (A. Kantian et al., arXiv:1609.03579). In their scheme, the energy offset between the two layers is increased dynamically such that the entropy of one layer is transferred to the other layer. Using the full-Hilbert-space approach, we compute the cooling dynamics under this scheme and show that it fails to cool down two-component fermions. We develop an alternative cooling scheme for two-component fermions, in which the spin-exchange interaction of one layer is significantly reduced. Using both full-Hilbert-space and matrix-product-state approaches, we find that our scheme can decrease the temperature of the other layer by roughly half.

  16. Qualitative Analysis: The Current Status.

    ERIC Educational Resources Information Center

    Cole, G. Mattney, Jr.; Waggoner, William H.

    1983-01-01

    To assist in designing/implementing qualitative analysis courses, examines reliability/accuracy of several published separation schemes, notes methods where particular difficulties arise (focusing on Groups II/III), and presents alternative schemes for the separation of these groups. Only cation analyses are reviewed. Figures are presented in…

  17. DISSECT: a new mnemonic-based approach to the categorization of aortic dissection.

    PubMed

    Dake, M D; Thompson, M; van Sambeek, M; Vermassen, F; Morales, J P

    2013-08-01

    Classification systems for aortic dissection provide important guides to clinical decision-making, but the relevance of traditional categorization schemes is being questioned in an era when endovascular techniques are assuming a growing role in the management of this frequently complex and catastrophic entity. In recognition of the expanding range of interventional therapies now used as alternatives to conventional treatment approaches, the Working Group on Aortic Diseases of the DEFINE Project developed a categorization system that features the specific anatomic and clinical manifestations of the disease process that are most relevant to contemporary decision-making. The DISSECT classification system is a mnemonic-based approach to the evaluation of aortic dissection. It guides clinicians through an assessment of six critical characteristics that facilitate optimal communication of the most salient details that currently influence the selection of a therapeutic option, including those findings that are key when considering an endovascular procedure, but are not taken into account by the DeBakey or Stanford categorization schemes. The six features of aortic dissection include: duration of disease; intimal tear location; size of the dissected aorta; segmental extent of aortic involvement; clinical complications of the dissection, and thrombus within the aortic false lumen. In current clinical practice, endovascular therapy is increasingly considered as an alternative to medical management or open surgical repair in select cases of type B aortic dissection. Currently, endovascular aortic repair is not used for patients with type A aortic dissection, but catheter-based techniques directed at peripheral branch vessel ischemia that may complicate type A dissection are considered valuable adjunctive interventions, when indicated. 
The use of a new system for categorization of aortic dissection, DISSECT, addresses the shortcomings of well-known established schemes devised more than 40 years ago, before the introduction of endovascular techniques. It will serve as a guide to support a critical analysis of contemporary therapeutic options and inform management decisions based on specific features of the disease process. Copyright © 2013 European Society for Vascular Surgery. All rights reserved.

  18. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    PubMed

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, by incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on trading scheme as well as the system benefit.
Compared with conventional optimization methods, BESMA is shown to be advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Patterning and templating for nanoelectronics.

    PubMed

    Galatsis, Kosmas; Wang, Kang L; Ozkan, Mihri; Ozkan, Cengiz S; Huang, Yu; Chang, Jane P; Monbouquette, Harold G; Chen, Yong; Nealey, Paul; Botros, Youssry

    2010-02-09

    The semiconductor industry will soon be launching the 32 nm complementary metal oxide semiconductor (CMOS) technology node, using 193 nm lithography patterning technology to fabricate microprocessors with more than 2 billion transistors. To ensure the survival of Moore's law, alternative patterning techniques that offer advantages beyond conventional top-down patterning are aggressively being explored. It is evident that most alternative patterning techniques may not offer compelling advantages to succeed conventional top-down lithography for silicon integrated circuits, but alternative approaches may well offer functional advantages in realising next-generation information-processing nanoarchitectures such as those based on cellular, bioinspired, magnetic dot logic, and crossbar schemes. This paper highlights and evaluates some patterning methods from the Center on Functional Engineered Nano Architectonics in Los Angeles and discusses key benchmarking criteria with respect to CMOS scaling.

  20. Alternative Packaging for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The frontside integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.

  1. Stroke prevention with oral anticoagulation in older people with atrial fibrillation - a pragmatic approach.

    PubMed

    Ali, Ali; Bailey, Claire; Abdelhafiz, Ahmed H

    2012-08-01

    With advancing age, the prevalence of both stroke and non-valvular atrial fibrillation (NVAF) increases. NVAF in old age has a high embolic potential if not anticoagulated. Oral anticoagulation therapy is cost effective in older people with NVAF due to their high baseline stroke risk. The current stroke and bleeding risk scoring schemes are based on complex scoring systems that are difficult to apply in clinical practice. Both types of scheme include similar risk factors for ischemic and bleeding events, which may lead to confusion in clinical decision making when balancing the risk of bleeding against the risk of stroke, thereby limiting the applicability of such schemes. The difficulty in applying such schemes, combined with physicians' fear of inducing bleeding complications, has resulted in underuse of anticoagulation therapy in older people. As older people (≥75 years) with NVAF are all at high risk of stroke, we suggest a pragmatic approach based on a yes/no decision rather than a risk-scoring stratification: an opt-out rather than an opt-in approach, unless there is a contraindication to oral anticoagulation. Antiplatelet agents should not be an alternative option for antithrombotic treatment in older people with NVAF, due to their lack of efficacy and the potential for them to be used as an excuse for not prescribing anticoagulation. Bleeding risk should be assessed on an individual basis, and the decision to anticoagulate should take patients' views into account.

  2. Stroke Prevention with Oral Anticoagulation in Older People with Atrial Fibrillation - A Pragmatic Approach

    PubMed Central

    Ali, Ali; Bailey, Claire; Abdelhafiz, Ahmed H

    2012-01-01

    With advancing age, the prevalence of both stroke and non-valvular atrial fibrillation (NVAF) increases. NVAF in old age has a high embolic potential if not anticoagulated. Oral anticoagulation therapy is cost effective in older people with NVAF due to their high baseline stroke risk. The current stroke and bleeding risk scoring schemes are based on complex scoring systems that are difficult to apply in clinical practice. Both types of scheme include similar risk factors for ischemic and bleeding events, which may lead to confusion in clinical decision making when balancing the risk of bleeding against the risk of stroke, thereby limiting the applicability of such schemes. The difficulty in applying such schemes, combined with physicians’ fear of inducing bleeding complications, has resulted in underuse of anticoagulation therapy in older people. As older people (≥75 years) with NVAF are all at high risk of stroke, we suggest a pragmatic approach based on a yes/no decision rather than a risk-scoring stratification: an opt-out rather than an opt-in approach, unless there is a contraindication to oral anticoagulation. Antiplatelet agents should not be an alternative option for antithrombotic treatment in older people with NVAF, due to their lack of efficacy and the potential for them to be used as an excuse for not prescribing anticoagulation. Bleeding risk should be assessed on an individual basis, and the decision to anticoagulate should take patients’ views into account. PMID:23185715

  3. Optimized resolved rate control of seven-degree-of-freedom Laboratory Telerobotic Manipulator (LTM) with application to three-dimensional graphics simulation

    NASA Technical Reports Server (NTRS)

    Barker, L. Keith; Mckinney, William S., Jr.

    1989-01-01

    The Laboratory Telerobotic Manipulator (LTM) is a seven-degree-of-freedom robot arm. Two of the arms were delivered to Langley Research Center for ground-based research to assess the use of redundant degree-of-freedom robot arms in space operations. Resolved-rate control equations for the LTM are derived. The equations are based on a scheme developed at the Oak Ridge National Laboratory for computing optimized joint angle rates in real time. The optimized joint angle rates actually represent a trade-off, as the hand moves, between small rates (least-squares solution) and those rates which work toward satisfying a specified performance criterion of joint angles. In singularities where the optimization scheme cannot be applied, alternate control equations are devised. The equations developed were evaluated using a real-time computer simulation to control a 3-D graphics model of the LTM.

  4. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    NASA Astrophysics Data System (ADS)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. The latter improves sensitivity by a factor of a few over the former, but its cost is 2 to 3 orders of magnitude higher.
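    The tracking stage of such a search can be illustrated with a toy Viterbi recursion over frequency bins. Here log_emit stands in for the per-segment detection statistic (the F statistic in the paper) and log_trans encodes which bin-to-bin moves the braking and timing-noise model permits; this is a dynamic-programming sketch, not the search pipeline.

```python
def viterbi(log_emit, log_trans):
    """Most probable hidden state path by dynamic programming.

    log_emit[t][s]: log-likelihood of state s at time step t.
    log_trans[s0][s1]: log transition weight from state s0 to s1.
    Returns one state index per time step.
    """
    n = len(log_emit[0])
    score = list(log_emit[0])  # best log-score of any path ending in each state
    back = []                  # backpointers, one list per later step
    for t in range(1, len(log_emit)):
        ptr, new = [], []
        for s in range(n):
            prev = max(range(n), key=lambda s0: score[s0] + log_trans[s0][s])
            ptr.append(prev)
            new.append(score[prev] + log_trans[prev][s] + log_emit[t][s])
        back.append(ptr)
        score = new
    path = [max(range(n), key=lambda s: score[s])]
    for ptr in reversed(back):  # trace the winning path backwards
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

    In a spin-down search the transition matrix would forbid upward frequency jumps larger than the timing noise allows, so the tracker follows a slowly drifting signal without explicitly searching frequency derivatives.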

  5. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence allows us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  6. Multi-criteria analysis for the determination of the best WEEE management scenario in Cyprus.

    PubMed

    Rousis, K; Moustakas, K; Malamis, S; Papadopoulos, A; Loizidou, M

    2008-01-01

    Waste from electrical and electronic equipment (WEEE) constitutes one of the most complicated solid waste streams in terms of its composition, and, as a result, it is difficult to manage effectively. In view of the environmental problems arising from WEEE management, many countries have established national legislation to improve the reuse, recycling and other forms of recovery of this waste stream so as to apply suitable management schemes. In this work, alternative systems are examined for WEEE management in Cyprus. These systems are evaluated by developing and applying the multi-criteria decision-making (MCDM) method PROMETHEE. In particular, through this MCDM method, 12 alternative management systems were compared and ranked according to their performance and efficiency. The obtained results show that the management schemes/systems based on partial disassembly are the most suitable for implementation in Cyprus. More specifically, the optimum scenario/system that can be implemented in Cyprus is that of partial disassembly, with forwarding of recyclable materials to the existing domestic market and disposal of the residues at landfill sites.
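    The outranking computation at the core of PROMETHEE II can be sketched with the simplest ("usual") preference function. A full study such as this one would also select per-criterion preference functions and indifference/preference thresholds, which are omitted in this minimal sketch.

```python
def promethee_rank(scores, weights, maximize):
    """Rank alternatives by PROMETHEE II net flow (usual preference function).

    scores[a][c]: performance of alternative a on criterion c.
    maximize[c]: True if higher is better on criterion c.
    Returns alternative indices, best first.
    """
    n = len(scores)
    wsum = float(sum(weights))

    def pref(a, b):  # weighted preference index of a over b, in [0, 1]
        total = 0.0
        for c, w in enumerate(weights):
            diff = scores[a][c] - scores[b][c]
            if not maximize[c]:
                diff = -diff
            if diff > 0:
                total += w
        return total / wsum

    net = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)   # leaving flow
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)  # entering flow
        net.append(plus - minus)
    return sorted(range(n), key=lambda a: -net[a])
```

    For instance, ranking three hypothetical WEEE scenarios on cost (to minimize) and recovery rate (to maximize) immediately pushes any scenario dominated on both criteria to the bottom of the ranking.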

  7. Alternative irradiation schemes for NIF and LMJ hohlraums

    NASA Astrophysics Data System (ADS)

    Bourgade, Jean-Luc; Bowen, Christopher; Gauthier, Pascal; Landen, Otto

    2018-02-01

    We explore two alternative irradiation schemes for the large (‘outer’) and small (‘inner’) angle beams that currently illuminate National Ignition Facility (NIF) and Laser Mégajoule cavities. In the first, while the outer laser beams enter through the usual end laser entrance holes (LEH), the inner beams enter through slots along the cavity axis wall, illuminating the back wall of the cavity. This avoids the current interaction of the inner laser beams with the gold wall bubbles generated by the outer beams, which leads to large time-dependent changes in drive symmetry. Another scheme potentially useful for NIF uses only the outer beams. The radiative losses through the slots or from the use of outer beams only are compensated by using a smaller cavity and LEH.

  8. Alternative irradiation schemes for NIF and LMJ hohlraums

    DOE PAGES

    Bourgade, Jean-Luc; Bowen, Christopher; Gauthier, Pascal; ...

    2017-12-13

    Here, we explore two alternative irradiation schemes for the large ('outer') and small ('inner') angle beams that currently illuminate National Ignition Facility (NIF) and Laser Mégajoule cavities. In the first, while the outer laser beams enter through the usual end laser entrance holes (LEH), the inner beams enter through slots along the cavity axis wall, illuminating the back wall of the cavity. This avoids the current interaction of the inner laser beams with the gold wall bubbles generated by the outer beams, which leads to large time-dependent changes in drive symmetry. Another scheme potentially useful for NIF uses only the outer beams. The radiative losses through the slots or from the use of outer beams only are compensated by using a smaller cavity and LEH.

  9. Alternative irradiation schemes for NIF and LMJ hohlraums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourgade, Jean-Luc; Bowen, Christopher; Gauthier, Pascal

    Here, we explore two alternative irradiation schemes for the large ('outer') and small ('inner') angle beams that currently illuminate National Ignition Facility (NIF) and Laser Mégajoule cavities. In the first, while the outer laser beams enter through the usual end laser entrance holes (LEH), the inner beams enter through slots along the cavity axis wall, illuminating the back wall of the cavity. This avoids the current interaction of the inner laser beams with the gold wall bubbles generated by the outer beams, which leads to large time-dependent changes in drive symmetry. Another scheme potentially useful for NIF uses only the outer beams. The radiative losses through the slots or from the use of outer beams only are compensated by using a smaller cavity and LEH.

  10. A Learning Based Fiducial-driven Registration Scheme for Evaluating Laser Ablation Changes in Neurological Disorders.

    PubMed

    Wan, Tao; Bloch, B Nicolas; Danish, Shabbar; Madabhushi, Anant

    2014-11-20

    In this work, we present a novel learning-based fiducial-driven registration (LeFiR) scheme which utilizes a point matching technique to identify the optimal configuration of landmarks to better recover the deformation between a target and a moving image. We employ the LeFiR scheme to model the localized nature of the deformation introduced by a new treatment modality, laser-induced interstitial thermal therapy (LITT), for treating neurological disorders. Magnetic resonance (MR) guided LITT has recently emerged as a minimally invasive alternative to craniotomy for local treatment of brain diseases (such as glioblastoma multiforme (GBM) and epilepsy). However, LITT is currently practised only as an investigational procedure worldwide due to a lack of data on longer-term patient outcomes following LITT. There is thus a need to quantitatively evaluate treatment-related changes between post- and pre-LITT imaging in terms of MR imaging markers. In order to validate LeFiR, we tested the scheme on a synthetic brain dataset (SBD) and in two real clinical scenarios for treating GBM and epilepsy with LITT. Four experiments under different deformation profiles simulating localized ablation effects of LITT on MRI were conducted on 286 pairs of SBD images. The training landmark configurations were obtained through 2000 iterations of registration, where the points with consistently best registration performance were selected. The estimated landmarks greatly improved the quality metrics compared to a uniform grid (UniG) placement scheme, a speeded-up robust features (SURF) based method, a scale-invariant feature transform (SIFT) based method, and a generic free-form deformation (FFD) approach. The LeFiR method achieved an average 90% improvement in recovering the local deformation, compared to 82% for the uniform grid placement, 62% for the SURF-based approach, and 16% for the generic FFD approach. On the real GBM and epilepsy data, the quantitative results showed that LeFiR outperformed UniG by 28% on average.

  11. Comparison of two matrix data structures for advanced CSM testbed applications

    NASA Technical Reports Server (NTRS)

    Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.

    1989-01-01

    The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.

  12. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  13. Factors affecting commercial application of embryo technologies in dairy cattle in Europe--a modelling approach.

    PubMed

    van Arendonk, Johan A M; Bijma, Piter

    2003-01-15

    Reproductive techniques have a major impact on the structure of breeding programmes, the rate of genetic gain and dissemination of genetic gain in populations. This manuscript reviews the impact of reproductive technologies on the underlying components of genetic gain and inbreeding with special reference to the role of female reproductive technology. Evaluation of alternative breeding schemes should be based on genetic gain while constraining inbreeding. Optimum breeding schemes can be characterised by: decreased importance of sib information; increased accuracy at the expense of intensity; and a factorial mating strategy. If large-scale embryo cloning becomes feasible, this will have a small impact on the rate of genetic gain but will have a large impact on the structure of breeding programmes.
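The "underlying components of genetic gain" mentioned above are conventionally combined in the breeder's equation, ΔG = i·r·σ_A / L. A minimal sketch of the trade-off between selection intensity and accuracy that the abstract describes; the numbers below are illustrative, not from the paper:

```python
def annual_genetic_gain(intensity, accuracy, sd_additive, generation_interval):
    """Breeder's equation: dG = (i * r * sigma_A) / L.

    intensity:           selection intensity i (standardized selection differential)
    accuracy:            accuracy of selection r (correlation of index with breeding value)
    sd_additive:         additive genetic standard deviation sigma_A
    generation_interval: generation interval L in years
    """
    return intensity * accuracy * sd_additive / generation_interval

# illustrative comparison: trading intensity for accuracy can raise annual gain
sib_scheme = annual_genetic_gain(2.0, 0.60, 1.0, 5.0)
accurate_scheme = annual_genetic_gain(1.4, 0.90, 1.0, 5.0)
```

Embryo technologies shift these components, e.g. by shortening the generation interval or changing attainable intensities, which is why the abstract evaluates schemes on gain at constrained inbreeding.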

  14. Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, R.; Passerini, S.; Vilim, R. B.

    In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels while respecting the system's physical constraints is presented. In order to achieve higher operational flexibility while ensuring that the implemented control loops do not influence the system's inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the set-points supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.

  15. The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2016-05-01

    Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
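A generic node-based SIRS model with a per-node immunization control can be sketched as follows. The paper's exact controlled model and optimality system are not reproduced here; the rate symbols and parameter values are assumptions for illustration:

```python
import numpy as np

def simulate_sirs(A, beta, gamma, delta, u, S0, I0, T, dt=0.01):
    """Forward-Euler integration of a node-based SIRS model on adjacency matrix A.

    Per node i (fractions S_i + I_i + R_i = 1):
      S_i' = delta*R_i - beta*S_i*sum_j A_ij*I_j - u_i*S_i   (reinfection-susceptible)
      I_i' = beta*S_i*sum_j A_ij*I_j - gamma*I_i             (infection/cure)
      R_i' = gamma*I_i + u_i*S_i - delta*R_i                 (recovery/immunity loss)
    u_i is the (here constant) immunization rate applied to susceptible node i.
    """
    S, I = np.array(S0, float), np.array(I0, float)
    R = 1.0 - S - I
    for _ in range(int(T / dt)):
        infection = beta * S * (A @ I)
        dS = delta * R - infection - u * S
        dI = infection - gamma * I
        dR = gamma * I + u * S - delta * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R
```

In the paper, u would vary over time and nodes and be chosen by solving the optimality system; here it is held constant only to keep the sketch short.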

  16. Health courts: an alternative to traditional tort law.

    PubMed

    Miller, Lisa A

    2011-01-01

    The current adversarial tort-based system of adjudicating malpractice claims is flawed. Alternate methods of compensation for birth injuries related to oxygen deprivation or mechanical injury are being utilized in Virginia and Florida. Although utilization of both of these schemes is limited, and they are not without problems in application, both have been successful in reducing the number of malpractice claims in the tort system and in reducing malpractice premiums. While the Florida and Virginia programs are primarily focused on compensation, other models outside the US include compensation as well as enhanced dispute resolution and the potential for clinical practice change through peer review. Experts in the fields of law and public policy in the United States have evaluated a variety of approaches and have proposed models for administrative health courts that would provide both compensation and dispute resolution for medical and nursing malpractice claims. These alternative models are based on transparency and disclosure, with just compensation for injuries and opportunities for improvements in patient safety.

  17. An experiment-based comparative study of fuzzy logic control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing

    1989-01-01

    An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.

  18. Novel neural control for a class of uncertain pure-feedback systems.

    PubMed

    Shen, Qikun; Shi, Peng; Zhang, Tianping; Lim, Cheng-Chew

    2014-04-01

    This paper is concerned with the problem of adaptive neural tracking control for a class of uncertain pure-feedback nonlinear systems. Using the implicit function theorem and the backstepping technique, a practical robust adaptive neural control scheme is proposed to guarantee that the tracking error converges to an adjustable neighborhood of the origin by choosing appropriate design parameters. In contrast to conventional Lyapunov-based design techniques, an alternative Lyapunov function is constructed for the development of the control law and learning algorithms. Differing from existing results in the literature, the control scheme does not need to compute the derivatives of virtual control signals at each step of the backstepping design procedure. Furthermore, the scheme requires the desired trajectory and its first derivative rather than its first n derivatives. In addition, a useful property of the basis functions of the radial basis function neural network, which is used in the control design, is explored. Simulation results illustrate the effectiveness of the proposed techniques.

  19. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective learning method for spiking neural networks, one that would be suitable for implementation in neuromorphic hardware while remaining based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes, involving artificial and spiking neuron models, is demonstrated on the iris benchmark task and on the practical task of gender recognition.
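The classical pair-based STDP rule that such learning schemes build on can be sketched as follows; the parameter values are illustrative, and the paper's specific rule variants are not given in the abstract:

```python
import numpy as np

def stdp_weight_change(pre_spikes, post_spikes,
                       a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Total synaptic weight change under the pair-based STDP rule.

    For every pre/post spike pair with lag dt = t_post - t_pre (times in ms):
      dt > 0 (pre before post): potentiation  +a_plus  * exp(-dt / tau_plus)
      dt < 0 (post before pre): depression    -a_minus * exp( dt / tau_minus)
    All-to-all spike pairing is assumed.
    """
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw
```

Causal pre-then-post pairings strengthen the synapse and anti-causal pairings weaken it, which is why the correlation between input and output spike trains governs learnability.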

  20. Sensor Proxy Mobile IPv6 (SPMIPv6)—A Novel Scheme for Mobility Supported IP-WSNs

    PubMed Central

    Islam, Md. Motaharul; Huh, Eui-Nam

    2011-01-01

    IP based Wireless Sensor Networks (IP-WSNs) are gaining importance for their broad range of applications in health-care, home automation, environmental monitoring, industrial control, vehicle telematics and agricultural monitoring. In all these applications, mobility in the sensor network, with special attention to energy efficiency, is a major issue to be addressed. Host-based mobility management protocols are not suitable for IP-WSNs because of their energy inefficiency, so network-based mobility management protocols can be an alternative for mobility-supported IP-WSNs. In this paper we propose a network-based mobility-supported IP-WSN protocol called Sensor Proxy Mobile IPv6 (SPMIPv6). We present its architecture and message formats, and evaluate its performance considering signaling cost, mobility cost and energy consumption. Our analysis shows that, with respect to the number of IP-WSN nodes, the proposed scheme reduces the signaling cost by 60% and 56%, as well as the mobility cost by 62% and 57%, compared to MIPv6 and PMIPv6, respectively. The simulation results also show that, in terms of the number of hops, SPMIPv6 decreases the signaling cost by 56% and 53%, as well as the mobility cost by 60% and 67%, compared to MIPv6 and PMIPv6, respectively. They also indicate that the proposed scheme reduces the level of energy consumption significantly. PMID:22319386

  1. Sensor proxy mobile IPv6 (SPMIPv6)--a novel scheme for mobility supported IP-WSNs.

    PubMed

    Islam, Md Motaharul; Huh, Eui-Nam

    2011-01-01

    IP based Wireless Sensor Networks (IP-WSNs) are gaining importance for their broad range of applications in health-care, home automation, environmental monitoring, industrial control, vehicle telematics and agricultural monitoring. In all these applications, mobility in the sensor network, with special attention to energy efficiency, is a major issue to be addressed. Host-based mobility management protocols are not suitable for IP-WSNs because of their energy inefficiency, so network-based mobility management protocols can be an alternative for mobility-supported IP-WSNs. In this paper we propose a network-based mobility-supported IP-WSN protocol called Sensor Proxy Mobile IPv6 (SPMIPv6). We present its architecture and message formats, and evaluate its performance considering signaling cost, mobility cost and energy consumption. Our analysis shows that, with respect to the number of IP-WSN nodes, the proposed scheme reduces the signaling cost by 60% and 56%, as well as the mobility cost by 62% and 57%, compared to MIPv6 and PMIPv6, respectively. The simulation results also show that, in terms of the number of hops, SPMIPv6 decreases the signaling cost by 56% and 53%, as well as the mobility cost by 60% and 67%, compared to MIPv6 and PMIPv6, respectively. They also indicate that the proposed scheme reduces the level of energy consumption significantly.

  2. Efficient C1-continuous phase-potential upwind (C1-PPU) schemes for coupled multiphase flow and transport with gravity

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-10-01

    In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.

  3. Achieving universal health coverage through voluntary insurance: what can we learn from the experience of Lao PDR?

    PubMed Central

    2013-01-01

    Background The Government of the Lao People's Democratic Republic (Lao PDR) has embarked on a path to achieve universal health coverage (UHC) through implementation of four risk-protection schemes. One of these schemes is community-based health insurance (CBHI), a voluntary scheme that targets roughly half the population. However, after 12 years of implementation, coverage through CBHI remains very low. Increasing coverage of the scheme would require expansion both to households in villages where CBHI currently operates and to new geographic areas. In this study we explore the prospects of both types of expansion by examining household- and district-level data. Methods Using a household survey based on a case-comparison design of 3000 households, we examine the determinants of enrolment at the household level in areas where the scheme is currently operating. We model the determinants of enrolment using a probit model and predicted probabilities. Findings from focus group discussions are used to explain the quantitative findings. To examine the prospects for geographic scale-up, we use secondary data to compare characteristics of districts with and without insurance, using a combination of univariate and multivariate analyses. The multivariate analysis is a probit model, which models the factors associated with roll-out of CBHI to the districts. Results The household findings show that enrolment is concentrated among the better off and that adverse selection is present in the scheme. The district-level findings show that to date, the scheme has been implemented in the most affluent areas, in closest proximity to the district hospitals, and in areas where quality of care is relatively good. Conclusions The household-level findings indicate that the scheme suffers from poor risk-pooling, which threatens financial sustainability.
The district-level findings call into question whether or not the Government of Laos can successfully expand to more remote, less affluent districts, with lower population density. We discuss the policy implications of the findings and specifically address whether CBHI can serve as a foundation for a national scheme, while exploring alternative approaches to reaching the informal sector in Laos and other countries attempting to achieve UHC. PMID:24344925

  4. [Cost-effectiveness analysis of an alternative for the provision of primary health care for beneficiaries of Seguro Popular in Mexico].

    PubMed

    Figueroa-Lara, Alejandro; González-Block, Miguel A

    2016-01-01

    To estimate the cost-effectiveness ratio of public and private health care providers funded by Seguro Popular. A pilot scheme contracting primary health care in the state of Hidalgo, Mexico, was evaluated through a population survey to assess quality of care and the detection of decreased vision. Costs were assessed from the payer perspective using institutional sources. The alternatives analyzed were a private provider with capitated and performance-based payment modalities, and a public provider funded through budget subsidies. Sensitivity analysis was performed using Monte Carlo simulations. The private provider is dominant in quality and in the cost-effective detection of decreased vision. Strategic purchasing from private providers of primary care has shown promising results as an alternative for improving the quality of health services and reducing costs.

  5. A numerical scheme to calculate temperature and salinity dependent air-water transfer velocities for any gas

    NASA Astrophysics Data System (ADS)

    Johnson, M. T.

    2010-10-01

    The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which allows the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
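The traditional wind-speed-plus-Schmidt-number approach described in the abstract can be sketched as follows, using the widely cited quadratic relation of Wanninkhof (1992). This is not the paper's full scheme, which also combines an air-side resistance term and salinity-dependent solubility:

```python
def water_side_transfer_velocity(u10, schmidt):
    """Water-side gas transfer velocity in cm/h via Schmidt-number scaling.

    u10:     wind speed at 10 m (m/s)
    schmidt: Schmidt number of the gas of interest in seawater

    Uses k_660 = 0.31 * u10^2 cm/h (Wanninkhof, 1992), defined at Sc = 660
    (CO2 in seawater at 20 C), then rescales with (Sc / 660)^-0.5.
    """
    k660 = 0.31 * u10**2
    return k660 * (schmidt / 660.0) ** -0.5
```

The Schmidt-number exponent of -0.5 is appropriate for a wavy surface; gases with higher Schmidt numbers transfer more slowly at the same wind speed.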

  6. Litigation and complaints procedures: objectives, effectiveness and alternatives.

    PubMed Central

    Whelan, C J

    1988-01-01

    Recent debates about redress mechanisms for medical accident victims have been sidetracked by fears of an American-style medical malpractice crisis. What is required is a framework within which the debate can resume. This paper proposes such a framework by focusing on the compensation and deterrence objectives and placing them in the wider context of the social costs of providing medical services. The framework is then used to assess and compare the effectiveness of differing approaches. In particular, the American and British experiences of litigation, including the concept of 'defensive medicine', are evaluated. Also discussed briefly are alternatives to court-based complaints procedures including 'no-fault' schemes, professional ethics and internal complaints mechanisms. PMID:3392721

  7. Design Mining Interacting Wind Turbines.

    PubMed

    Preen, Richard J; Bull, Larry

    2016-01-01

    An initial study has recently been presented of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced; and alternative coevolution collaboration schemes are examined.

  8. Mechanical Extraction of Power From Ocean Currents and Tides

    NASA Technical Reports Server (NTRS)

    Jones, Jack; Chao, Yi

    2010-01-01

    A proposed scheme for generating electric power from rivers and from ocean currents, tides, and waves is intended to offer economic and environmental advantages over prior such schemes, some of which are at various stages of implementation, others of which have not yet advanced beyond the concept stage. This scheme would be less environmentally objectionable than are prior schemes that involve the use of dams to block rivers and tidal flows. This scheme would also not entail the high maintenance costs of other proposed schemes that call for submerged electric generators and cables, which would be subject to degradation by marine growth and corrosion. A basic power-generation system according to the scheme now proposed would not include any submerged electrical equipment. The submerged portion of the system would include an all-mechanical turbine/pump unit that would superficially resemble a large land-based wind turbine (see figure). The turbine axis would turn slowly as it captured energy from the local river flow, ocean current, tidal flow, or flow from an ocean-wave device. The turbine axis would drive a pump through a gearbox to generate an enclosed flow of water, hydraulic fluid, or other suitable fluid at a relatively high pressure [typically approx. 500 psi (approx. 3.4 MPa)]. The pressurized fluid could be piped to an onshore or offshore facility, above the ocean surface, where it would be used to drive a turbine that, in turn, would drive an electric generator. The fluid could be recirculated between the submerged unit and the power-generation facility in a closed flow system; alternatively, if the fluid were seawater, it could be taken in from the ocean at the submerged turbine/pump unit and discharged back into the ocean from the power-generation facility.
Another alternative would be to use the pressurized flow to charge an elevated reservoir or other pumped-storage facility, from which fluid could later be released to drive a turbine/generator unit at a time of high power demand. Multiple submerged turbine/pump units could be positioned across a channel to extract more power than could be extracted by a single unit. In that case, the pressurized flows in their output pipes would be combined, via check valves, into a wider pipe that would deliver the combined flow to a power-generating or pumped-storage facility.

  9. Hydrogen production from solar energy

    NASA Technical Reports Server (NTRS)

    Eisenstadt, M. M.; Cox, K. E.

    1975-01-01

    Three alternatives for hydrogen production from solar energy have been analyzed on both efficiency and economic grounds. The analysis shows that the alternative using solar energy followed by thermochemical decomposition of water to produce hydrogen is the optimum one. The other schemes considered were the direct conversion of solar energy to electricity by silicon cells and water electrolysis, and the use of solar energy to power a vapor cycle followed by electrical generation and electrolysis. The capital cost of hydrogen via the thermochemical alternative was estimated at $575/kW of hydrogen output or $3.15/million Btu. Although this cost appears high when compared with hydrogen from other primary energy sources or from fossil fuel, environmental and social costs which favor solar energy may prove this scheme feasible in the future.

  10. A high-efficiency high-power-generation system for automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naidu, M.; Boules, N.; Henry, R.

    This paper presents a new scheme for the efficient generation of the high electric power demanded by future automobiles. The new system consists of a permanent-magnet (PM) alternator having high-energy MAGNEQUENCH (MQ) magnets and split winding, and a novel electronic voltage-regulation scheme. A proof-of-concept system, capable of providing 100/250 A (idle/cruising) at 14 V, has been built and tested in the laboratory with encouraging results. This high output is provided at 15--20 percentage points higher efficiencies than conventional automotive alternators, which translates into considerable fuel economy savings. The system is 8 dB quieter and has a rotor inertia of only 2/3 that of an equivalent production alternator, thus allowing for a belt drive without excessive slippage.

  11. A high-efficiency, high power generation system for automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naidu, M.; Boules, N.; Henry, R.

The paper presents a new scheme for the efficient generation of the high electric power demanded by future automobiles. The new system consists of a permanent-magnet (PM) alternator having high-energy MAGNEQUENCH (MQ) magnets and a split winding, and a novel electronic voltage-regulation scheme. A proof-of-concept system, capable of providing 100/250 A (idle/cruising) at 14 V, has been built and tested in the laboratory with encouraging results. This high output is provided at 15--20 percentage points higher efficiencies than conventional automotive alternators, which translates into considerable fuel economy savings. The system is 8 dB quieter and has a rotor inertia of only 2/3 that of an equivalent production alternator, thus allowing for a belt drive without excessive slippage.

  12. Lattice design for the CEPC double ring scheme

    NASA Astrophysics Data System (ADS)

    Wang, Yiwei; Su, Feng; Bai, Sha; Zhang, Yuan; Bian, Tianjian; Wang, Dou; Yu, Chenghui; Gao, Jie

    2018-01-01

    A future Circular Electron Positron Collider (CEPC) has been proposed by China with the main goal of studying the Higgs boson. Its baseline design, chosen on the basis of its performance, is a double ring scheme; an alternative design is a partial double ring scheme which reduces the budget while maintaining an adequate performance. This paper will present the collider ring lattice design for the double ring scheme. The CEPC will also work as a W and a Z factory. For the W and Z modes, except in the RF region, compatible lattices were obtained by scaling down the magnet strength with energy.

  13. Mild, moderate, meaningful? Examining the psychological and functioning correlates of DSM-5 eating disorder severity specifiers.

    PubMed

    Gianini, Loren; Roberto, Christina A; Attia, Evelyn; Walsh, B Timothy; Thomas, Jennifer J; Eddy, Kamryn T; Grilo, Carlos M; Weigel, Thomas; Sysko, Robyn

    2017-08-01

This study evaluated the DSM-5 severity specifiers for treatment-seeking groups of participants with anorexia nervosa (AN), the purging form of bulimia nervosa (BN), and binge-eating disorder (BED). One hundred sixty-two participants with AN, 93 participants with BN, and 343 participants with BED were diagnosed using semi-structured interviews, sub-categorized using DSM-5 severity specifiers, and compared on demographic and cross-sectional clinical measures. In AN, the number of previous hospitalizations and the duration of illness increased with severity, but there was no difference across severity groups on measures of eating pathology, depression, or measures of self-reported physical or emotional functioning. In BN, the level of eating concerns increased across the severity groups, but the groups did not differ on measures of depression, self-esteem, and most eating pathology variables. In BN, support was also found for an alternative severity classification scheme based upon the number of methods of purging. In BED, levels of several measures of eating pathology and self-reported physical and emotional functioning increased across the severity groups. For BED, however, support was also found for an alternative severity classification scheme based upon overvaluation of shape and weight. Preliminary evidence was also found for a transdiagnostic severity index based upon overvaluation of shape and weight. Overall, these data show limited support for the DSM-5 severity specifiers for BN and modest support for the DSM-5 severity specifiers for AN and BED. © 2017 Wiley Periodicals, Inc.

  14. V-DRASTIC: Using visualization to engage policymakers in groundwater vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Bojórquez-Tapia, Luis A.; Cruz-Bello, Gustavo M.; Luna-González, Laura; Juárez, Lourdes; Ortiz-Pérez, Mario A.

    2009-06-01

Groundwater vulnerability mapping is increasingly being used to design aquifer protection and management strategies. This paper presents a dynamic visualization method for groundwater vulnerability mapping. This method—called V-DRASTIC—extends the capabilities of DRASTIC, an overlay/index technique that has been applied worldwide to evaluate the condition of hydrogeological factors and determine groundwater vulnerability at regional scales. V-DRASTIC is based upon principles of psychophysics (the study of how people respond to stimuli) to generate alternative groundwater vulnerability categorization schemes. These are used as inputs in a fuzzy pattern recognition procedure to enable planners, decision makers and stakeholders to identify which scheme conveys meaningful information regarding groundwater vulnerability across a territory. V-DRASTIC was applied in the groundwater vulnerability assessment of two urban watersheds in Mexico.

  15. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.

  16. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
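    The "average channel capacity" metric can be illustrated with a standard information-theoretic sketch (an assumption here, since the abstract does not give the paper's exact formula): treat each barcode module as a binary symmetric channel whose flip probability varies with the local speckle noise, and average the per-module capacities.

```python
import math

def bsc_capacity(p):
    """Capacity (bits/use) of a binary symmetric channel with flip probability p."""
    if p in (0.0, 1.0):
        return 1.0
    # C = 1 - H(p), with H the binary entropy function.
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

# Hypothetical nonlocal speckle noise: flip probability varies across modules,
# so some regions carry far less information than others.
flip_probs = [0.01, 0.05, 0.20, 0.40]
avg_capacity = sum(bsc_capacity(p) for p in flip_probs) / len(flip_probs)
```

    Under this model, a coding scheme matched to the noise distribution (e.g. BCH with appropriate block structure) can approach the average capacity more closely than a fixed Reed-Solomon layout.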

  17. Thermal control extravehicular life support system

    NASA Technical Reports Server (NTRS)

    1975-01-01

The results of a comprehensive study which defined an Extravehicular Life Support System Thermal Control System (TCS) are presented. The design of the prototype hardware and a detailed summary of the prototype TCS fabrication and test effort are given. Several heat rejection subsystems, water management subsystems, humidity control subsystems, pressure control schemes and temperature control schemes were evaluated. Alternative integrated TCS systems were studied, and an optimum system was selected based on quantitative weighting of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, a bubble expansion tank for water management, a slurper and rotary separator for humidity control, and a pump, a temperature control valve, a gas separator and a vehicle umbilical connector for water transport. The prototype hardware complied with program objectives.

  18. An alternate lining scheme for solar ponds - Results of a liner test rig

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raman, P.; Kishore, V.V.N.

    1990-01-01

    Solar pond lining schemes consisting of combinations of clays and Low Density Polyethylene (LDPE) films have been experimentally evaluated by means of a Solar Pond Liner Test Rig. Results indicate that LDPE film sandwiched between two layers of clay can be effectively used for lining solar ponds.

  19. Analysis of a Teacher's Pedagogical Arguments Using Toulmin's Model and Argumentation Schemes

    ERIC Educational Resources Information Center

    Metaxas, N.; Potari, D.; Zachariades, T.

    2016-01-01

    In this article, we elaborate methodologies to study the argumentation speech of a teacher involved in argumentative activities. The standard tool of analysis of teachers' argumentation concerning pedagogical matters is Toulmin's model. The theory of argumentation schemes offers an alternative perspective on the analysis of arguments. We propose…

  20. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC

  1. All-optical cryptography of M-QAM formats by using two-dimensional spectrally sliced keys.

    PubMed

    Abbade, Marcelo L F; Cvijetic, Milorad; Messani, Carlos A; Alves, Cleiton J; Tenenbaum, Stefan

    2015-05-10

    There has been an increased interest in enhancing the security of optical communications systems and networks. All-optical cryptography methods have been considered as an alternative to electronic data encryption. In this paper we propose and verify the use of a novel all-optical scheme based on cryptographic keys applied on the spectral signal for encryption of the M-QAM modulated data with bit rates of up to 200 gigabits per second.

  2. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

Magneto-acoustic tomography with current injection builds on electrical impedance imaging technology. To explore potential applications in imaging biological tissue and to enhance image quality, a new scan mode for the transducer is proposed, based on translational and circular scanning, to record acoustic signals from sources. An imaging algorithm to analyze these signals was developed for this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of this scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify the simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image is correlated with the conductivity at the boundaries of the sample, but the image results change depending on the distance and angular aspect of the transducer. In general, as angle and distance decrease, the image quality improves. Moreover, experimental data confirmed the correlation. The acoustic-source images resulting from the alternative scanning mode have yielded the outline of a phantom medium. This scan mode enables improvements in the sensitivity of the detecting unit and a change to a transducer array that would improve the efficiency and accuracy of acoustic-source images.

  3. Problems Associated with Grid Convergence of Functionals

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Atkins, Harld L.

    2008-01-01

    The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order-property of a scheme are presented. The problem is studied within the context of the inviscid supersonic flow over a blunt body; however, the problem and solutions presented are not unique to this example.

  4. On Problems Associated with Grid Convergence of Functionals

    NASA Technical Reports Server (NTRS)

Salas, Manuel D.; Atkins, Harold L.

    2009-01-01

    The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order property of a scheme are presented. The problems are studied within the context of the inviscid supersonic flow over a blunt body; however, the problems and solutions presented are not unique to this example.

  5. Towards Universal Health Coverage via Social Health Insurance in China: Systemic Fragmentation, Reform Imperatives, and Policy Alternatives.

    PubMed

    He, Alex Jingwei; Wu, Shaolong

    2017-12-01

    China's remarkable progress in building a comprehensive social health insurance (SHI) system was swift and impressive. Yet the country's decentralized and incremental approach towards universal coverage has created a fragmented SHI system under which a series of structural deficiencies have emerged with negative impacts. First, contingent on local conditions and financing capacity, benefit packages vary considerably across schemes, leading to systematic inequity. Second, the existence of multiple schemes, complicated by massive migration, has resulted in weak portability of SHI, creating further barriers to access. Third, many individuals are enrolled on multiple schemes, which causes inefficient use of government subsidies. Moral hazard and adverse selection are not effectively managed. The Chinese government announced its blueprint for integrating the urban and rural resident schemes in early 2016, paving the way for the ultimate consolidation of all SHI schemes and equal benefits for all. This article proposes three policy alternatives to inform the consolidation: (1) a single-pool system at the prefectural level with significant government subsidies, (2) a dual-pool system at the prefectural level with risk-equalization mechanisms, and (3) a household approach without merging existing pools. Vertical integration to the provincial level is unlikely to happen in the near future. Two caveats are raised to inform this transition towards universal health coverage.

  6. An agent-based model for water management and planning in the Lake Naivasha basin, Kenya

    NASA Astrophysics Data System (ADS)

    van Oel, Pieter; Mulatu, Dawit; Odongo, Vincent; Onyando, Japheth; Becht, Robert; van der Veen, Anne

    2013-04-01

A variety of human and natural processes influence the ecological and economic state of the Lake Naivasha basin. The ecological wealth and recent economic developments in the area are strongly connected to Lake Naivasha, which supports a rich variety of flora, mammal and bird species. Many human activities depend on clean freshwater from the lake, whereas the availability of freshwater of good quality has recently been seriously affected by water abstractions and the use of fertilizers in agriculture. Management alternatives include those aiming at limiting water abstractions and fertilizer use. A possible way to achieve reduced use of water and fertilizers is the introduction of Payment for Environmental Services (PES) schemes. As the Lake Naivasha basin and its population have experienced increasing pressures, various disputes and disagreements have arisen about the processes responsible for the problems experienced and the effectiveness of management alternatives. Besides conflicts of interest and disagreements on responsibilities, there are serious factual disagreements. To share scientific knowledge on the effects of socio-ecological system processes on the Lake Naivasha basin, tools may be used that expose information at temporal and spatial scales that are meaningful to stakeholders. In this study we use a spatially explicit agent-based modelling (ABM) approach to depict the interactions between socio-economic and natural subsystems for supporting a more sustainable governance of the river basin resources. Agents consider alternative livelihood strategies and decide to pursue the one they perceive as likely to be most profitable. Agents may predict and sense the availability of resources and can also observe the economic performance achieved by neighbouring agents.
Results are presented at the basin and subbasin level to provide relevant knowledge to Water Resources Users Associations which are important collective forums for water management through which PES schemes are managed.

  7. Kinespell: Kinesthetic Learning Activity and Assessment in a Digital Game-Based Learning Environment

    NASA Astrophysics Data System (ADS)

    Cariaga, Ada Angeli; Salvador, Jay Andrae; Solamo, Ma. Rowena; Feria, Rommel

Various approaches in learning are commonly classified into visual, auditory and kinesthetic (VAK) learning styles. One way of addressing the VAK learning styles is through game-based learning, which motivates learners to pursue knowledge holistically. The paper presents Kinespell, an unconventional method of learning through digital game-based learning. Kinespell is geared towards enhancing not only the learner's spelling abilities but also motor skills by utilizing wireless controllers. It monitors the player's performance through an integrated assessment scheme. Results show that Kinespell may accommodate the VAK learning styles and is a promising alternative to established methods of learning and assessing students' performance in spelling.

  8. Feasibility of community-based health insurance in rural tropical Ecuador.

    PubMed

    Eckhardt, Martin; Forsberg, Birger Carl; Wolf, Dorothee; Crespo-Burgos, Antonio

    2011-03-01

    The main objective of this study was to assess people's willingness to join a community-based health insurance (CHI) model in El Páramo, a rural area in Ecuador, and to determine factors influencing this willingness. A second objective was to identify people's understanding and attitudes toward the presented CHI model. A cross-sectional survey was carried out using a structured questionnaire. Of an estimated 829 households, 210 were randomly selected by two-stage cluster sampling. Attitudes toward the scheme were assessed. Information on factors possibly influencing willingness to join was collected and related to the willingness to join. To gain an insight into a respondent's possible ability to pay, health care expenditure on the last illness episode was assessed. Feasibility was defined as at least 50% of household heads willing to join the scheme. Willingness to join the CHI model for US$30 per year was 69.3%. With affiliation, 92.2% of interviewees stated that they would visit the local health facility more often. Willingness to join was found to be negatively associated with education. Other variables showed no significant association with willingness to join. The study showed a positive attitude toward the CHI scheme. Substantial health care expenditures on the last illness episode were documented. The investigation concludes that CHI in the study region is feasible. However, enrollments are likely to be lower than the stated willingness to join. Still, a CHI scheme should present an interesting financing alternative in rural areas where services are scarce and difficult to sustain.

  9. Method and apparatus for configuration control of redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1991-01-01

    A method and apparatus to control a robot or manipulator configuration over the entire motion based on augmentation of the manipulator forward kinematics is disclosed. A set of kinematic functions is defined in Cartesian or joint space to reflect the desirable configuration that will be achieved in addition to the specified end-effector motion. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. A task-based adaptive scheme is then utilized to directly control the configuration variables so as to achieve tracking of some desired reference trajectories throughout the robot motion. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The present invention can also be used for optimization of any kinematic objective function, or for satisfaction of a set of kinematic inequality constraints, as in an obstacle avoidance problem. In contrast to pseudoinverse-based methods, the configuration control scheme ensures cyclic motion of the manipulator, which is an essential requirement for repetitive operations. The control law is simple and computationally very fast, and does not require either the complex manipulator dynamic model or the complicated inverse kinematic transformation. The configuration control scheme can alternatively be implemented in joint space.
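    The augmentation idea can be sketched numerically (a hypothetical 3-link planar arm with an assumed kinematic function phi(q) = q1, and a plain resolved-rate/Newton iteration rather than the patent's adaptive control law): the end-effector coordinates and the user-defined function together form a square task vector, so the configuration is controlled directly alongside the end-effector motion.

```python
import numpy as np

# Hypothetical 3-link planar arm: end-effector (x, y) plus one user-defined
# kinematic function (here, the first joint angle) form a square augmented task.
L = np.array([1.0, 0.8, 0.5])          # link lengths (arbitrary values)

def fk(q):
    """Forward kinematics of the augmented task vector [x, y, phi(q)]."""
    c = np.cumsum(q)                    # absolute link angles
    x = np.sum(L * np.cos(c))
    y = np.sum(L * np.sin(c))
    return np.array([x, y, q[0]])      # phi(q) = q[0] pins down the posture

def jacobian(q, eps=1e-6):
    """Central-difference Jacobian of the augmented forward kinematics."""
    J = np.zeros((3, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

# Resolved-rate step toward a nearby augmented target; the augmented Jacobian
# is square, so a plain linear solve replaces the pseudoinverse.
q = np.array([0.3, 0.5, -0.2])
target = fk(q) + np.array([0.01, -0.01, 0.005])
for _ in range(20):
    q = q + np.linalg.solve(jacobian(q), target - fk(q))
```

    Because the augmented system is square, repeating the same task trajectory returns the joints to the same configuration, which is the cyclicity property the abstract contrasts with pseudoinverse-based methods.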

  10. Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication

    NASA Astrophysics Data System (ADS)

    Niaz, M. T.; Imdad, F.; Kim, H. S.

    2017-01-01

An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed that generate codes in which the number of bits representing ones equals the number representing zeros. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
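    The abstract does not specify FD-OOK's codeword construction, so the following minimal sketch uses a Manchester-style mapping (an assumed stand-in, not the paper's encoder) to show how equal counts of ones and zeros pin the dimming level at exactly 50% regardless of the data:

```python
def encode_balanced(bits):
    """Encode bits so every codeword has equal ones and zeros (Manchester-style)."""
    # Each input bit maps to a 2-chip balanced symbol: 0 -> (0, 1), 1 -> (1, 0).
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def decode_balanced(chips):
    """Recover the original bits from consecutive chip pairs."""
    return [1 if chips[i] > chips[i + 1] else 0 for i in range(0, len(chips), 2)]

bits = [1, 0, 1, 1, 0]
chips = encode_balanced(bits)
# Exactly half the chips are ones, so the average LED brightness is constant 50%.
assert sum(chips) == len(chips) // 2
assert decode_balanced(chips) == bits
```

    The cost of this balance is spectral efficiency: every data bit consumes two channel symbols, which is one of the two parameters the abstract analyses.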

  11. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
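    As a rough illustration of the grey relational analysis step (a generic textbook formulation with equal criterion weights and hypothetical scores, not the paper's hierarchy-weighted version), the grades below rank alternatives against mixed cost/benefit criteria:

```python
import numpy as np

def grey_relational_grades(X, benefit, rho=0.5):
    """Rank alternatives (rows) against criteria (cols) by grey relational grade."""
    X = np.asarray(X, float)
    # Normalize each criterion to [0, 1]; flip cost criteria so larger is better.
    lo, hi = X.min(axis=0), X.max(axis=0)
    norm = (X - lo) / (hi - lo)
    norm[:, ~benefit] = 1.0 - norm[:, ~benefit]
    # Deviation from the ideal (all-ones) reference sequence.
    delta = np.abs(1.0 - norm)
    dmin, dmax = delta.min(), delta.max()
    # Grey relational coefficients, rho = distinguishing coefficient.
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    return xi.mean(axis=1)              # equal criterion weights

# Hypothetical scores: 3 alternatives x 3 criteria (cost, cost, benefit).
X = [[120, 8, 0.90],
     [100, 9, 0.85],
     [150, 6, 0.95]]
grades = grey_relational_grades(X, benefit=np.array([False, False, True]))
best = int(np.argmax(grades))
```

    In the paper's full method, AHP supplies the criterion weights that replace the equal-weight mean used here.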

  12. Performance Testing of Thermal Interface Filler Materials in a Bolted Aluminum Interface Under Thermal/Vacuum Conditions

    NASA Technical Reports Server (NTRS)

    Glasgow, S. D.; Kittredge, K. B.

    2003-01-01

    A thermal interface material is one of the many tools often used as part of the thermal control scheme for space-based applications. Historically, at Marshall Space Flight Center, CHO-THERM 1671 has primarily been used for applications where an interface material was deemed necessary. However, numerous alternatives have come on the market in recent years. It was decided that a number of these materials should be tested against each other to see if there were better performing alternatives. The tests were done strictly to compare the thermal performance of the materials relative to each other under repeatable conditions and do not take into consideration other design issues, such as off-gassing, electrical conduction, isolation, etc. The purpose of this Technical Memorandum is to detail the materials tested, test apparatus, procedures, and results of these tests. The results show that there are a number of better performing alternatives now available.

  13. Efficient Trajectory Options Allocation for the Collaborative Trajectory Options Program

    NASA Technical Reports Server (NTRS)

    Rodionova, Olga; Arneson, Heather; Sridhar, Banavar; Evans, Antony

    2017-01-01

    The Collaborative Trajectory Options Program (CTOP) is a Traffic Management Initiative (TMI) intended to control the air traffic flow rates at multiple specified Flow Constrained Areas (FCAs), where demand exceeds capacity. CTOP allows flight operators to submit the desired Trajectory Options Set (TOS) for each affected flight with associated Relative Trajectory Cost (RTC) for each option. CTOP then creates a feasible schedule that complies with capacity constraints by assigning affected flights with routes and departure delays in such a way as to minimize the total cost while maintaining equity across flight operators. The current version of CTOP implements a Ration-by-Schedule (RBS) scheme, which assigns the best available options to flights based on a First-Scheduled-First-Served heuristic. In the present study, an alternative flight scheduling approach is developed based on linear optimization. Results suggest that such an approach can significantly reduce flight delays, in the deterministic case, while maintaining equity as defined using a Max-Min fairness scheme.
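    A toy version of the comparison (hypothetical flights and slot times, a single FCA, no trajectory options or RTCs) contrasts the first-scheduled-first-served heuristic with an exhaustive search for the minimum-total-delay assignment:

```python
from itertools import permutations

# Hypothetical instance: scheduled FCA entry times and available slot times (minutes).
sched = [0, 10, 20]
slots = [5, 15, 25, 35]

def delay(flight_time, slot_time):
    """Delay incurred by a slot, or None if the slot is before the flight's time."""
    return slot_time - flight_time if slot_time >= flight_time else None

def rbs(sched, slots):
    """Ration-by-schedule: each flight, in schedule order, takes the best free slot."""
    free, total = list(slots), 0
    for t in sched:
        d, s = min((delay(t, s), s) for s in free if delay(t, s) is not None)
        free.remove(s)
        total += d
    return total

def optimal(sched, slots):
    """Exhaustive search for the minimum-total-delay assignment (tiny instances only)."""
    best = None
    for perm in permutations(slots, len(sched)):
        ds = [delay(t, s) for t, s in zip(sched, perm)]
        if all(d is not None for d in ds):
            cost = sum(ds)
            best = cost if best is None else min(best, cost)
    return best
```

    On this symmetric instance the two coincide (total delay 15); once flights attach different RTC weights to their options, the heuristic and the optimization-based assignment diverge, which is the gap the linear-optimization approach exploits.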

  14. Improved field free line magnetic particle imaging using saddle coils.

    PubMed

    Erbe, Marlitt; Sattel, Timo F; Buzug, Thorsten M

    2013-12-01

Magnetic particle imaging (MPI) is a novel tracer-based imaging method detecting the distribution of superparamagnetic iron oxide (SPIO) nanoparticles in vivo, in three dimensions and in real time. Conventionally, MPI uses the signal emitted by SPIO tracer material located at a field free point (FFP). To increase the sensitivity of MPI, however, an alternative encoding scheme collecting the particle signal along a field free line (FFL) was proposed. To provide the magnetic fields needed for line imaging in MPI, a scanner setup that is very efficient in terms of electrical power consumption is needed. At the same time, the scanner needs to provide high magnetic field homogeneity along the FFL as well as parallel to its alignment, to prevent the artifacts that arise with the efficient Radon-based reconstruction methods enabled by a line encoding scheme. This work presents a dynamic FFL scanner setup for MPI that outperforms all previously presented setups in electrical power consumption as well as magnetic field quality.

  15. Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo

    2016-02-01

Superresolution optical fluctuation imaging (SOFI) is a superresolution fluorescence microscopy technique which enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used to tackle this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
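    The Fourier-interpolation idea can be sketched as zero-padding a centered spectrum (a generic band-limited upsampling sketch for square, even-sized images; the paper applies the same principle to SOFI cumulant images):

```python
import numpy as np

def fourier_upsample(img, factor):
    """Upsample a 2-D image by zero-padding its centered Fourier spectrum."""
    ny, nx = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    NY, NX = ny * factor, nx * factor
    P = np.zeros((NY, NX), dtype=complex)
    y0, x0 = (NY - ny) // 2, (NX - nx) // 2
    P[y0:y0 + ny, x0:x0 + nx] = F      # original spectrum at the center
    # Scale so intensities are preserved after the larger inverse transform.
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor**2

img = np.random.rand(16, 16)
up = fourier_upsample(img, 4)
# Band-limited (trigonometric) interpolation is exact at the original grid points.
assert up.shape == (64, 64)
assert np.allclose(up[::4, ::4], img)
```

    Unlike real-space interpolation kernels, this adds no blur of its own: the interpolated image contains exactly the spatial frequencies of the original, just sampled on a finer grid.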

  16. Methodology of ecooriented assessment of constructive schemes of cast in-situ RC framework in civil engineering

    NASA Astrophysics Data System (ADS)

    Avilova, I. P.; Krutilova, M. O.

    2018-01-01

Economic growth is the main determinant of the trend of increasing greenhouse gas (GHG) emissions. Therefore, reducing emissions and stabilizing GHG levels in the atmosphere have become urgent tasks for avoiding the worst predicted consequences of climate change. GHG emissions from the construction industry constitute a significant part of industrial GHG emissions and are expected to increase consistently. The problem could be successfully addressed with the help of both economic and organizational restrictions, based on enhanced algorithms for calculating and penalizing environmental harm in the building industry. This study aims to quantify the GHG emissions caused by different constructive schemes of RC framework in concrete casting. The results show that the proposed methodology allows a comparative analysis of alternative projects in residential housing, taking into account the environmental damage caused by the construction process. The study was carried out in the framework of the Program of flagship university development on the base of Belgorod State Technological University named after V.G. Shoukhov.

  17. Classifying quantum entanglement through topological links

    NASA Astrophysics Data System (ADS)

    Quinta, Gonçalo M.; André, Rui

    2018-04-01

We propose an alternative classification scheme for quantum entanglement based on topological links. This is done by identifying a nonrigid ring with a particle, attributing the act of cutting and removing a ring to the operation of tracing out the particle, and associating linked rings with entangled particles. This analogy naturally leads us to a classification of multipartite quantum entanglement based on all possible distinct links for a given number of rings. To determine all the different possibilities, we develop a formalism that associates any link with a polynomial, with each polynomial thereby defining a distinct equivalence class. To demonstrate the use of this classification scheme, we choose qubit quantum states as our example of a physical system. A possible procedure to obtain qubit states from the polynomials is also introduced, providing an example state for each link class. We apply the formalism to the quantum systems of three and four qubits and demonstrate the potential of these tools in the context of qubit networks.

  18. Composite scheme using localized relaxation with non-standard finite difference method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Kumar, Vivek; Raghurama Rao, S. V.

    2008-04-01

    Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
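    The composite strategy of Liska and Wendroff cited in this abstract can be sketched with classical building blocks: a few steps of an accurate but oscillatory scheme, followed by one step of a dissipative scheme acting as a filter. In this sketch Lax-Wendroff stands in for the accurate NSFDM and Lax-Friedrichs for the localized relaxation filter; schemes, grid size, and CFL number are illustrative assumptions, not the paper's method.

```python
import math

def flux(u):
    # Burgers flux f(u) = u^2/2, the standard nonlinear scalar test case
    return 0.5 * u * u

def lax_friedrichs(u, c):
    # dissipative first-order scheme on a periodic grid; c = dt/dx
    n = len(u)
    return [0.5 * (u[i - 1] + u[(i + 1) % n])
            - 0.5 * c * (flux(u[(i + 1) % n]) - flux(u[i - 1]))
            for i in range(n)]

def lax_wendroff(u, c):
    # accurate second-order scheme (Richtmyer two-step form); oscillatory at shocks
    n = len(u)
    uh = [0.5 * (u[i] + u[(i + 1) % n])
          - 0.5 * c * (flux(u[(i + 1) % n]) - flux(u[i]))
          for i in range(n)]
    return [u[i] - c * (flux(uh[i]) - flux(uh[i - 1])) for i in range(n)]

def composite_step(u, c, k=3):
    # k-1 accurate steps, then one dissipative step that acts as a filter
    for _ in range(k - 1):
        u = lax_wendroff(u, c)
    return lax_friedrichs(u, c)

# sine wave steepening into a shock on a periodic domain
n, c = 200, 0.4
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
for _ in range(40):
    u = composite_step(u, c)
```

With the dissipative step applied every third update, the shock that forms from the sine wave is captured without the growing oscillations that the accurate scheme alone would produce.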

  19. γ5 in the four-dimensional helicity scheme

    NASA Astrophysics Data System (ADS)

    Gnendiger, C.; Signer, A.

    2018-05-01

    We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.

  20. Application of artificial neural networks in nonlinear analysis of trusses

    NASA Technical Reports Server (NTRS)

    Alam, J.; Berke, L.

    1991-01-01

    A method is developed to incorporate a neural network model, based upon the backpropagation algorithm, for material response into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is also implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.

  1. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
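    The majorization-minimization idea behind the minimization scheme described above can be illustrated in one dimension: an absolute-value penalty on second differences (a 1-D stand-in for the Hessian matrix norms) is majorized by a weighted quadratic, and each iteration solves a reweighted least-squares problem. A small dense solver replaces the paper's preconditioned conjugate gradient; the test signal, λ, and iteration counts are illustrative assumptions.

```python
def gauss_solve(A, b):
    # dense Gaussian elimination with partial pivoting (small systems only)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def second_diff(x):
    return [x[i - 1] - 2 * x[i] + x[i + 1] for i in range(1, len(x) - 1)]

def irls_denoise(y, lam=2.0, iters=10, eps=1e-3):
    # minimize ||x - y||^2 + lam * sum |D2 x| by iteratively reweighted least squares
    n = len(y)
    x = list(y)
    for _ in range(iters):
        w = [1.0 / max(abs(d), eps) for d in second_diff(x)]   # MM weights
        # normal equations of the majorizer: (I + lam * D2^T W D2) x = y
        A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        for r, wr in enumerate(w):      # row r of D2 touches columns r, r+1, r+2
            idx, coef = (r, r + 1, r + 2), (1.0, -2.0, 1.0)
            for a, ca in zip(idx, coef):
                for b, cb in zip(idx, coef):
                    A[a][b] += lam * wr * ca * cb
        x = gauss_solve(A, y)
    return x

# piecewise-linear "tent" signal corrupted by alternating noise
n = 30
clean = [min(i, n - 1 - i) / 14.5 for i in range(n)]
noisy = [c + (0.3 if i % 2 == 0 else -0.3) for i, c in enumerate(clean)]
denoised = irls_denoise(noisy)
```

On this test signal the reweighting removes the high-frequency noise while largely preserving the kink of the tent, the qualitative behavior that distinguishes such penalties from a plain quadratic smoother.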

  2. A suffix arrays based approach to semantic search in P2P systems

    NASA Astrophysics Data System (ADS)

    Shi, Qingwei; Zhao, Zheng; Bao, Hu

    2007-09-01

    Building a semantic search system on top of peer-to-peer (P2P) networks is becoming an attractive and promising alternative scheme for reasons of scalability, data freshness and search cost. In this paper, we present a Suffix Arrays based algorithm for Semantic Search (SASS) in P2P systems, which generates a distributed Semantic Overlay Network (SON) construction for full-text search in P2P networks. For each node in the P2P network, SASS distributes document indices based on a set of suffix arrays, by which clusters are created depending on words or phrases shared between documents; therefore, the search cost for a given query is decreased by scanning only semantically related documents. In contrast to recently announced SON schemes designed using metadata or predefined classes, SASS is an unsupervised approach for the decentralized generation of SONs. SASS is also an incremental, linear-time algorithm, which efficiently handles the problem of node updates in P2P networks. Our simulation results demonstrate that SASS yields high search efficiency in dynamic environments.
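    The single-node primitive underlying such a system — a suffix array over indexed text, searched by binary search for the band of suffixes sharing a query prefix — can be sketched as follows. The distribution of indices across peers and the clustering into SONs are outside the scope of this sketch.

```python
def build_suffix_array(text):
    # sort suffix start positions lexicographically; O(n^2 log n), fine for a sketch
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    # binary search for the contiguous band of suffixes starting with `pattern`
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                      # lower bound of the band
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                      # upper bound of the band
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(start, lo))

sa = build_suffix_array("banana")
hits = find_occurrences("banana", sa, "ana")   # start positions of "ana"
```

The naive quadratic construction is enough for a sketch; a production system would use a linear-time suffix-array construction and distribute the index.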

  3. Alternative indicators for measuring hospital productivity.

    PubMed

    Serway, G D; Strum, D W; Haug, W F

    1987-08-01

    This article explores the premise that the appropriateness and usefulness of typical hospital productivity measures have been affected by three changes in delivery: organizational restructuring and other definition and data-source changes that make full-time equivalent employee (FTE) measurements ambiguous; the transition to prospective payment (diagnosis-related groups); and the increase in capitation (prepaid, at-risk) programs. The effects of these changes on productivity management indicate the need for alternative productivity indicators. Several productivity measures that complement these changes in internal operations and the external hospital business environment are presented. These are based on an analysis of four hospitals within a multihospital system, and an illustration and interpretation of an array of measures, based on ten months of actual data, is provided. In conclusion, the recommendation is made for hospital management to collect an expanded set of productivity measures and review them in light of the changing expense and revenue management schemes inherent in new payment modes.

  4. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. Memory bandwidth usage, the main bottleneck on modern computer architectures, is specifically addressed. A high efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
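    One Locally One-Dimensional timestep for the 2-D heat equation shows where the low memory traffic comes from: each sweep reduces to independent tridiagonal systems solved in O(n) by the Thomas algorithm, touching each array line once. Grid size, boundary conditions and the ratio r = dt/dx² below are illustrative assumptions, not the paper's reservoir configuration.

```python
def thomas(a, b, c, d):
    # Thomas algorithm: O(n) solve of a tridiagonal system;
    # a = sub-, b = main, c = super-diagonal, d = right-hand side
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_sweep(line, r):
    # backward-Euler 1-D diffusion step with zero Dirichlet boundaries
    n = len(line)
    a = [-r] * n; b = [1.0 + 2.0 * r] * n; c = [-r] * n
    a[0] = 0.0; c[-1] = 0.0
    return thomas(a, b, c, line)

def lod_step(u, r):
    # x-sweep along every row, then y-sweep along every column
    u = [implicit_sweep(row, r) for row in u]
    cols = [implicit_sweep(list(col), r) for col in zip(*u)]
    return [list(row) for row in zip(*cols)]

n = 24
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0          # point heat source
r = 1.0                          # dt/dx^2 well above the explicit stability limit
for _ in range(5):
    u = lod_step(u, r)
```

Unlike an explicit scheme, the implicit sweeps remain stable at r = 1.0 (the explicit 2-D limit is r ≤ 1/4), which is what makes large timesteps on billion-unknown grids affordable.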

  5. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch, increasing tool temperature due to plastic work, and the sensitivity of material properties and lubrication to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm, which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
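    The learning-between-runs idea can be sketched with a proportional-type ILC update on a toy repeatable process. The paper's update is a non-linear least-squares step on flange-geometry error; the plant, gain, and reference trajectory below are illustrative assumptions.

```python
def plant(u, drift=0.3):
    # toy repeatable process: first-order lag plus a run-to-run repeatable
    # disturbance (standing in for e.g. a tool-temperature offset)
    y, state = [], 0.0
    for v in u:
        state += 0.5 * (v - state)
        y.append(state + drift)
    return y

def ilc_run(y_ref, iterations=25, gamma=0.8):
    # u_{k+1} = u_k + gamma * (y_ref - y_k): learn only from the previous run
    u = [0.0] * len(y_ref)
    errors = []
    for _ in range(iterations):
        y = plant(u)
        e = [r - m for r, m in zip(y_ref, y)]
        errors.append(max(abs(v) for v in e))
        u = [ui + gamma * ei for ui, ei in zip(u, e)]
    return u, errors

y_ref = [i / 19 for i in range(20)]   # reference trajectory for the run
u, errors = ilc_run(y_ref)
```

Because the disturbance repeats from run to run, the tracking error contracts geometrically between runs even though no in-process feedback is used, mirroring the argument above that a between-parts learning loop can stabilise the process.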

  6. A novel two-stage evaluation system based on a Group-G1 approach to identify appropriate emergency treatment technology schemes in sudden water source pollution accidents.

    PubMed

    Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong

    2016-02-01

    Sudden water source pollution resulting from hazardous materials has gradually become a major threat to the safety of the urban water supply. Over the past years, various treatment techniques have been proposed for the removal of the pollutants to minimize the threat of such pollution. Given the diversity of techniques available, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed based on a circulation-correction improved Group-G1 method to determine the optimal emergency treatment technology scheme, considering the areas of contaminant elimination in both drinking water sources and water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and was subdivided into four levels. Then, a technique evaluation index system containing four sets of criteria weights was constructed in stage 2 to obtain the optimum treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested on a practical cadmium-contamination accident that occurred in 2012. The results show that this system is capable of facilitating scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.

  7. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method respectively to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
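    The conventional TE-based coefficients that both optimized schemes are measured against come from solving the Taylor moment conditions Σ_j w_j s_j^m = m! δ_{m,d} on a chosen stencil; the SA and LS schemes instead fit the weights over a wavenumber band. The small dense solver and the stencils below are illustrative.

```python
from math import factorial

def gauss_solve(A, b):
    # dense Gaussian elimination with partial pivoting (tiny systems only)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fd_weights(stencil, deriv):
    # Taylor-expansion weights: sum_j w_j * s_j**m = m! if m == deriv else 0,
    # making the stencil exact for polynomials up to degree len(stencil)-1
    n = len(stencil)
    A = [[float(s ** m) for s in stencil] for m in range(n)]
    b = [float(factorial(deriv)) if m == deriv else 0.0 for m in range(n)]
    return gauss_solve(A, b)

w3 = fd_weights([-1, 0, 1], 1)          # classic 2nd-order central difference
w5 = fd_weights([-2, -1, 0, 1, 2], 1)   # 4th-order central difference
```

For grid spacing dx the weights are divided by dx**deriv; w3 recovers the familiar (u_{i+1} - u_{i-1})/2 stencil.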

  8. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced the computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
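    The two-stage logic — rank everything with a cheap surrogate metric, then spend the expensive metric only on the surviving augmented subset — can be sketched as follows. The "images", both metrics, and the subset sizes are toy stand-ins, not the registration pipeline or the inference model for sizing the augmented subset.

```python
import random
random.seed(0)

def two_stage_select(target, atlases, cheap, costly, augmented_size, fusion_size):
    # Stage 1: trim the full collection with the inexpensive surrogate metric
    augmented = sorted(atlases, key=lambda a: cheap(target, a))[:augmented_size]
    # Stage 2: evaluate the full-cost relevance metric only on the survivors
    return sorted(augmented, key=lambda a: costly(target, a))[:fusion_size]

# toy stand-ins: "images" are numbers, the costly metric is the exact distance,
# and the cheap metric is that distance corrupted by bounded surrogate error
atlases = list(range(100))
target = 42
costly = lambda t, a: abs(t - a)
cheap = lambda t, a: abs(t - a) + random.uniform(0.0, 5.0)

fusion_set = two_stage_select(target, atlases, cheap, costly,
                              augmented_size=20, fusion_size=3)
```

Only 20 of the 100 atlases ever see the costly metric, yet the fusion set still contains the truly closest atlases because the surrogate error is bounded relative to the augmented-subset margin — the property the inference model above establishes probabilistically.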

  9. Encoding and decoding of digital spiral imaging based on bidirectional transformation of light's spatial eigenmodes.

    PubMed

    Zhang, Wuhong; Chen, Lixiang

    2016-06-15

    Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both azimuthal and radial indices, ℓ and p) possesses much higher fidelity than one with a one-index LG spectrum (considering only the ℓ index). Our work provides an alternative image encoding/decoding scheme for free-space optical communications.

  10. Optical image encryption using multilevel Arnold transform and noninterferometric imaging

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2011-11-01

    Information security has attracted much current attention due to the rapid development of modern technologies, such as the computer and the internet. We propose a novel method for optical image encryption using a multilevel Arnold transform and rotatable-phase-mask noninterferometric imaging. An optical image encryption scheme is developed in the gyrator transform domain, and one phase-only mask (i.e., phase grating) is rotated and updated during image encryption. For the decryption, an iterative retrieval algorithm is proposed to extract high-quality plaintexts. Conventional encoding methods (such as digital holography) have been proven vulnerable to attacks, and the proposed optical encoding scheme can effectively eliminate this security deficiency and significantly enhance cryptosystem security. The proposed strategy based on the rotatable phase-only mask can provide a new alternative for data/image encryption in noninterferometric imaging.

  11. Quantum cryptography with entangled photons

    PubMed

    Jennewein; Simon; Weihs; Weinfurter; Zeilinger

    2000-05-15

    By realizing a quantum cryptography system based on polarization entangled photon pairs we establish highly secure keys, because a single photon source is approximated and the inherent randomness of quantum measurements is exploited. We implement a novel key distribution scheme using Wigner's inequality to test the security of the quantum channel, and, alternatively, realize a variant of the BB84 protocol. Our system has two completely independent users separated by 360 m, and generates raw keys at rates of 400-800 bits/s with bit error rates around 3%.

  12. Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namiki, Ryo

    2011-04-15

    An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.

  13. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, namely a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants of high Degrees of Freedom (7-DOFs).

  14. Alternative difference analysis scheme combining R -space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  15. Antibiotic removal from water: A highly efficient silver phosphate-based Z-scheme photocatalytic system under natural solar light.

    PubMed

    Wang, Jiajia; Chen, Hui; Tang, Lin; Zeng, Guangming; Liu, Yutang; Yan, Ming; Deng, Yaocheng; Feng, Haopeng; Yu, Jiangfang; Wang, Longlu

    2018-10-15

    Photocatalytic degradation is an alternative method for removing pharmaceutical compounds from water; however, it is hard to achieve efficient removal rates because of the low efficiency of photocatalysts. In this study, an efficient Z-scheme photocatalyst was constructed by integrating graphitic carbon nitride (CN) and reduced graphene oxide (rGO) with silver phosphate (AP) via a facile precipitation method. Notably, the ternary AP/rGO/CN composite showed superior photocatalytic and anti-photocorrosion performance under both intense sunlight and weak indoor light irradiation. NOF was completely degraded in only 30 min, and about 85% of NOF was mineralized after 2 h under intense sunlight irradiation. rGO works not only as a sheltering layer to protect AP from photocorrosion but also as a mediator for Z-scheme electron transport, which protects AP from photoreduction. This strategy could be a promising way to construct photocatalytic systems with high efficiency for the removal of antibiotics under natural light irradiation. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Bidirectional fiber-wireless and fiber-IVLLC integrated system based on polarization-orthogonal modulation scheme.

    PubMed

    Lu, Hai-Han; Li, Chung-Yi; Chen, Hwan-Wei; Ho, Chun-Ming; Cheng, Ming-Te; Huang, Sheng-Jhe; Yang, Zih-Yi; Lin, Xin-Yao

    2016-07-25

    A bidirectional fiber-wireless and fiber-invisible laser light communication (IVLLC) integrated system that employs a polarization-orthogonal modulation scheme for hybrid cable television (CATV)/microwave (MW)/millimeter-wave (MMW)/baseband (BB) signal transmission is proposed and demonstrated. To our knowledge, it is the first bidirectional fiber-wireless and fiber-IVLLC integrated system with hybrid CATV/MW/MMW/BB signal to adopt a polarization-orthogonal modulation scheme. For downlink transmission, carrier-to-noise ratio (CNR), composite second-order (CSO), composite triple-beat (CTB), and bit error rate (BER) perform well over 40-km single-mode fiber (SMF) and 10-m RF/50-m optical wireless transport scenarios. For uplink transmission, good BER performance is obtained over a 40-km SMF and 50-m optical wireless transport scenario. Such a bidirectional fiber-wireless and fiber-IVLLC integrated system for hybrid CATV/MW/MMW/BB signal transmission will be an attractive alternative for providing broadband integrated services, including CATV, Internet, and telecommunication services, and represents a significant step toward the convergence of the fiber backbone and the RF/optical wireless feeder.

  17. Alternative difference analysis scheme combining R-space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy.

    PubMed

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    2017-07-01

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of Fe(phen) 3 spin crossover complex and yielded reliable distance change and excitation population.

  18. Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms

    DOE PAGES

    Roald, Line Alnaes; Andersson, Goran

    2017-08-29

    Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.

  19. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, J; Gao, H

    2015-06-15

    Purpose: This work is to investigate a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first, reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photo-acoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both optical sources and ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  20. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    NASA Astrophysics Data System (ADS)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
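    The per-branching-point construction can be sketched with a toy two-level class tree; a nearest-centroid rule stands in for the local Random Forest at each internal node, and the tree, labels, and features are hypothetical rather than actual CORINE/FAO-LCCS/EUNIS classes.

```python
# hypothetical two-level H/LC hierarchy: internal nodes map to their children
TREE = {"root": ["woodland", "grassland"],
        "woodland": ["broadleaf", "conifer"]}

def leaves_under(node):
    kids = TREE.get(node)
    if not kids:
        return {node}
    return set().union(*(leaves_under(k) for k in kids))

def centroid(rows):
    return [sum(r[d] for r in rows) / len(rows) for d in range(len(rows[0]))]

def fit(X, y):
    # one local model per branching point: a centroid for each child branch
    models = {}
    for node, kids in TREE.items():
        models[node] = {k: centroid([x for x, lab in zip(X, y)
                                     if lab in leaves_under(k)])
                        for k in kids}
    return models

def predict(models, x):
    node = "root"
    while node in models:   # descend the thematic tree, one local decision per level
        node = min(models[node],
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(x, models[node][k])))
    return node

# toy training cells: 2-D "spectral" features per labelled leaf class
X = [(0.0, 0.0), (0.2, 0.1), (0.0, 2.0), (0.1, 2.1), (3.0, 1.0), (3.2, 0.9)]
y = ["broadleaf", "broadleaf", "conifer", "conifer", "grassland", "grassland"]
models = fit(X, y)
```

Integrating the local decisions top-down yields a single global prediction, and a misclassification at an upper level can be traced to one local model — the interpretability benefit the comparison above highlights.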

  1. Particulate Photocatalyst Sheets Based on Carbon Conductor Layer for Efficient Z-Scheme Pure-Water Splitting at Ambient Pressure.

    PubMed

    Wang, Qian; Hisatomi, Takashi; Suzuki, Yohichi; Pan, Zhenhua; Seo, Jeongsuk; Katayama, Masao; Minegishi, Tsutomu; Nishiyama, Hiroshi; Takata, Tsuyoshi; Seki, Kazuhiko; Kudo, Akihiko; Yamada, Taro; Domen, Kazunari

    2017-02-01

    Development of sunlight-driven water splitting systems with high efficiency, scalability, and cost-competitiveness is a central issue for mass production of solar hydrogen as a renewable and storable energy carrier. Photocatalyst sheets comprising a particulate hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) embedded in a conductive thin film can realize efficient and scalable solar hydrogen production using Z-scheme water splitting. However, the use of expensive precious metal thin films that also promote reverse reactions is a major obstacle to developing a cost-effective process at ambient pressure. In this study, we present a standalone particulate photocatalyst sheet based on an earth-abundant, relatively inert, and conductive carbon film for efficient Z-scheme water splitting at ambient pressure. A SrTiO3:La,Rh/C/BiVO4:Mo sheet is shown to achieve unassisted pure-water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency (STH) of 1.2% at 331 K and 10 kPa, while retaining 80% of this efficiency at 91 kPa. The STH value of 1.0% is the highest among Z-scheme pure-water splitting systems operating at ambient pressure. The working mechanism of the photocatalyst sheet is discussed on the basis of band diagram simulation. In addition, the photocatalyst sheet split pure water more efficiently than conventional powder suspension systems and photoelectrochemical parallel cells because H+ and OH- concentration overpotentials and an IR drop between the HEP and OEP were effectively suppressed. The proposed carbon-based photocatalyst sheet, which can be used at ambient pressure, is an important alternative to (photo)electrochemical systems for practical solar hydrogen production.

  2. Hybrid simulation combining two space-time discretization of the discrete-velocity Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel

    2017-11-01

    Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, it has a major drawback compared with alternative discretization schemes, i.e. finite-volume or finite-difference methods: it is limited to uniform Cartesian grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number of unity imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the error of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with gradual mesh refinement in the finite-volume domain.
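For orientation, the "stream-collide" half of the coupling is easy to sketch. Below is a minimal D2Q9 BGK stream-collide update on a periodic uniform grid; the grid size and relaxation time are assumed values, and the finite-volume coupling itself is not reproduced. The test only checks the conservation property that both halves of such a hybrid must respect.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8            # BGK relaxation time (assumed value)
nx = ny = 16         # small periodic Cartesian grid

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distribution."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

def step(f):
    """One stream-collide update: BGK collision, then node-based streaming."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau          # collide
    for i in range(9):                                    # stream (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Small density perturbation on a quiescent field.
rho0 = 1.0 + 0.01*np.sin(2*np.pi*np.arange(nx)/nx)[:, None]*np.ones((nx, ny))
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(50):
    f = step(f)
mass = f.sum()
```

Both collision (which preserves the zeroth moment by construction) and streaming (a pure permutation of values) conserve mass exactly, which is the invariant an overlapping-interface coupling has to maintain.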

  3. A study of alternative schemes for extrapolation of secular variation at observatories

    USGS Publications Warehouse

    Alldredge, L.R.

    1976-01-01

    The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.

  4. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  5. PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA

    NASA Astrophysics Data System (ADS)

    Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.

    2013-07-01

    The off-line Eulerian AURAMS chemical transport model was adapted to simulate the atmospheric fate of seven PAHs: phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~5000 24 h average PAH measurements from 45 sites, eight of which also provided data on particle/gas partitioning which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. Model performance differed substantially between measurement locations and the limited available evidence suggests that the model spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.
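One widely used particle/gas partitioning scheme is Junge-Pankow adsorption; the abstract does not say whether it is one of the two schemes compared, so the sketch below is illustrative only, and the vapour pressures and aerosol surface-area parameter are assumed values.

```python
# Junge-Pankow adsorption model: the particle-bound fraction phi of a
# semivolatile compound depends on its subcooled-liquid vapour pressure
# p_l (Pa) and the aerosol surface area per unit volume of air, theta
# (cm^2 aerosol per cm^3 air). Both parameter values below are assumed.
C_JUNGE = 17.2     # Pa cm, empirical constant of the Junge equation
THETA = 1.1e-5     # assumed aerosol surface-area concentration

def particle_fraction(p_l, theta=THETA):
    """Fraction of the compound bound to particles under the Junge equation."""
    return C_JUNGE * theta / (p_l + C_JUNGE * theta)

# PAHs span orders of magnitude in vapour pressure, so phi ranges from
# nearly 0 (light PAHs) to nearly 1 (heavy PAHs). p_l values illustrative.
phi_phen = particle_fraction(2e-2)   # phenanthrene-like volatility
phi_bap = particle_fraction(2e-7)    # benzo[a]pyrene-like volatility
```

This spread in phi is why partitioning-scheme choice matters most for the heavier species such as benzo[a]pyrene, as the abstract notes.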

  6. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III

    1988-01-01

    Model problem development and analysis continue with the Alternate Resonance Tuning (ART) concept. The various topics described are presently at different stages of completion: investigation of the effectiveness of the ART concept under an external propagating pressure field associated with propeller passage by the fuselage; analysis of ART performance with a double panel wall mounted in a flexible frame model; development of a data fitting scheme using a branch analysis with a Newton-Raphson scheme in multiple dimensions to determine values of critical parameters in the actual experimental apparatus; and investigation of the ART effect with real panels as opposed to the spring-mass-damper systems currently used in much of the theory.

  7. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of identifying the failure category and classifying it according to the failed element. During the third phase, a general evaluation of the failure is performed, with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system.
Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape are also analyzed in this thesis. They were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm addresses directly the complexity and multi-dimensionality associated with a damaged aircraft dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.
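The detection component rests on the negative-selection idea from artificial immune systems, which can be sketched in a few lines. The "self" data, radii, and two-dimensional feature space below are entirely hypothetical; real implementations work in a normalized multi-dimensional space of flight parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Self": nominal flight data in a normalized 2-D feature space (hypothetical).
self_points = rng.normal(0.5, 0.05, size=(200, 2))
SELF_RADIUS = 0.08   # assumed radius of the self hyperspheres

def generate_detectors(n=500):
    """Negative selection: random candidate detectors are kept only if they
    do NOT cover any self sample, so the surviving detectors tile the
    non-self region of the feature space."""
    detectors = []
    while len(detectors) < n:
        cand = rng.random(2)
        if np.min(np.linalg.norm(self_points - cand, axis=1)) > SELF_RADIUS:
            detectors.append(cand)
    return np.array(detectors)

def is_failure(x, detectors, radius=0.05):
    """A sample is flagged as a failure if any detector covers it."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) < radius)

detectors = generate_detectors()
```

Identification and evaluation then amount to asking *which* detectors fire and how far into non-self space the sample lies, which is where the clustering and detector-shape choices mentioned above come in.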

  8. What Is the Value of Value-Based Purchasing?

    PubMed

    Tanenbaum, Sandra J

    2016-10-01

    Value-based purchasing (VBP) is a widely favored strategy for improving the US health care system. The meaning of value that predominates in VBP schemes is (1) conformance to selected process and/or outcome metrics, and sometimes (2) such conformance at the lowest possible cost. In other words, VBP schemes choose some number of "quality indicators" and financially incent providers to meet them (and not others). Process measures are usually based on clinical science that cannot determine the effects of a process on individual patients or patients with comorbidities, and do not necessarily measure effects that patients value; additionally, there is no provision for different patients valuing different things. Proximate outcome measures may or may not predict distal ones, and the more distal the outcome, the less reliably it can be attributed to health care. Outcome measures may be quite rudimentary, such as mortality rates, or highly contestable: survival or function after prostate surgery? When cost is an element of value-based purchasing, it is the cost to the value-based payer and not to other payers or patients' families. The greatest value of value-based purchasing may not be to patients or even payers, but to policy makers seeking a morally justifiable alternative to politically contested regulatory policies. Copyright © 2016 by Duke University Press.

  9. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.
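Schematically, the expansion described above takes the following generic form; the exact small parameter, cost integrand $L$, and primary/perturbation dynamics $f_0$, $f_1$ depend on the specific launch problem and are not given in the abstract.

```latex
% Expand the value function in the small parameter \epsilon:
V(x,t;\epsilon) = V^{(0)}(x,t) + \epsilon\, V^{(1)}(x,t)
                + \epsilon^2\, V^{(2)}(x,t) + \cdots
% Zeroth order: the HJB equation for the primary dynamics f_0 alone,
% solved in closed form:
0 = V_t^{(0)} + \min_{u}\Big[\, L(x,u) + V_x^{(0)} f_0(x,u) \Big]
% Higher orders: linear first-order PDEs driven by the perturbation
% dynamics f_1, with the forcing known from the previous order:
0 = V_t^{(k)} + V_x^{(k)} f_0
    + \underbrace{V_x^{(k-1)} f_1}_{\text{known from order } k-1},
  \qquad k \ge 1,
% which are integrated by quadrature along zeroth-order characteristics.
```

The linearity of the higher-order equations is what reduces each correction to quadratures, and hence makes the scheme suitable for real-time evaluation.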

  10. Effect of exercise referral schemes in primary care on physical activity and improving health outcomes: systematic review and meta-analysis

    PubMed Central

    Taylor, A H; Fox, K R; Hillsdon, M; Anokye, N; Campbell, J L; Foster, C; Green, C; Moxham, T; Mutrie, N; Searle, J; Trueman, P; Taylor, R S

    2011-01-01

    Objective To assess the impact of exercise referral schemes on physical activity and health outcomes. Design Systematic review and meta-analysis. Data sources Medline, Embase, PsycINFO, Cochrane Library, ISI Web of Science, SPORTDiscus, and ongoing trial registries up to October 2009. We also checked study references. Study selection Design: randomised controlled trials or non-randomised controlled (cluster or individual) studies published in peer review journals. Population: sedentary individuals with or without medical diagnosis. Exercise referral schemes defined as: clear referrals by primary care professionals to third party service providers to increase physical activity or exercise, physical activity or exercise programmes tailored to individuals, and initial assessment and monitoring throughout programmes. Comparators: usual care, no intervention, or alternative exercise referral schemes. Results Eight randomised controlled trials met the inclusion criteria, comparing exercise referral schemes with usual care (six trials), alternative physical activity intervention (two), and an exercise referral scheme plus a self determination theory intervention (one). Compared with usual care, follow-up data for exercise referral schemes showed an increased number of participants who achieved 90-150 minutes of physical activity of at least moderate intensity per week (pooled relative risk 1.16, 95% confidence intervals 1.03 to 1.30) and a reduced level of depression (pooled standardised mean difference −0.82, −1.28 to −0.35). Evidence of a between group difference in physical activity of moderate or vigorous intensity or in other health outcomes was inconsistent at follow-up. We did not find any difference in outcomes between exercise referral schemes and the other two comparator groups. 
None of the included trials separately reported outcomes in individuals with specific medical diagnoses. Substantial heterogeneity in the quality and nature of the exercise referral schemes across studies might have contributed to the inconsistency in outcome findings. Conclusions Considerable uncertainty remains as to the effectiveness of exercise referral schemes for increasing physical activity, fitness, or health indicators, or whether they are an efficient use of resources for sedentary people with or without a medical diagnosis. PMID:22058134
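For readers unfamiliar with the pooling step behind figures like the relative risk of 1.16 above, a fixed-effect inverse-variance pooling of relative risks can be sketched as below. The trial counts are made up, and the review's own pooled estimates may come from a different (e.g. random-effects) model.

```python
import math

def pooled_relative_risk(studies):
    """Fixed-effect inverse-variance pooling of relative risks.

    `studies` is a list of (events_treat, n_treat, events_ctrl, n_ctrl)
    tuples; returns (pooled RR, 95% CI lower, 95% CI upper)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        # Standard error of log RR for a 2x2 table.
        se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        w = 1 / se**2                 # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1 / den)
    lo, hi = pooled - 1.96*se_pooled, pooled + 1.96*se_pooled
    return math.exp(pooled), math.exp(lo), math.exp(hi)

# Illustrative (made-up) trials: (active events, active N, control events, control N).
toy = [(50, 100, 40, 100), (30, 80, 25, 80), (60, 120, 45, 110)]
rr, lo, hi = pooled_relative_risk(toy)
```

Pooling on the log scale and back-transforming keeps the confidence interval asymmetric around the RR, as in standard meta-analysis practice.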

  11. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the applications of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the increase of number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
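The conservation property at stake in the pure-diffusion case can be illustrated with a toy grid-based Monte Carlo random walk (not the paper's algorithm): because samples are moved rather than created or destroyed, the particle count, i.e. the zeroth moment of the pdf, is conserved exactly.

```python
import random

random.seed(0)
nx = 32                        # periodic 1-D grid
particles = [nx // 2] * 1000   # all notional pdf samples start in one cell

def diffuse(particles, steps=100):
    """Unbiased random walk modelling pure diffusion: each particle hops
    one cell left or right per step. Since particles are only moved, never
    created or destroyed, total particle number is conserved exactly."""
    out = list(particles)
    for _ in range(steps):
        out = [(p + random.choice((-1, 1))) % nx for p in out]
    return out

after = diffuse(particles)
```

The recontamination and directional-bias problems discussed in the paper arise when convection is added and samples must be redistributed between grid cells; the pure-diffusion case above is the benign baseline.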

  12. Exploration of a Capability-Focused Aerospace System of Systems Architecture Alternative with Bilayer Design Space, Based on RST-SOM Algorithmic Methods

    PubMed Central

    Li, Zhifei; Qin, Dongliang

    2014-01-01

    In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. In order to confront one of the most challenging features of a huge design space in capability based analysis (CBA), a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on the existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough sets theory) and SOM (self-organized mapping) techniques, the alternative to the aerospace system of systems architecture was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572

  13. Exploration of a capability-focused aerospace system of systems architecture alternative with bilayer design space, based on RST-SOM algorithmic methods.

    PubMed

    Li, Zhifei; Qin, Dongliang; Yang, Feng

    2014-01-01

    In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. In order to confront one of the most challenging features of a huge design space in capability based analysis (CBA), a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on the existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough sets theory) and SOM (self-organized mapping) techniques, the alternative to the aerospace system of systems architecture was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
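The SOM half of the RST-SOM machinery can be sketched with a minimal self-organized map in NumPy. The grid size, learning-rate and neighbourhood schedules, and the clustered "performance space" data below are all assumed; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=500, lr0=0.5, sigma0=1.5):
    """Minimal self-organized map: pull the best-matching unit (BMU) and
    its grid neighbours toward each sample, with decaying rate and radius."""
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of every unit, for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.1
        x = data[rng.integers(len(data))]
        # BMU: the unit whose weight vector is closest to the sample.
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), (h, w))
        g = np.exp(-((coords - bmu) ** 2).sum(axis=2) / (2 * sigma**2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Grid cell onto which a sample is mapped."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Two well-separated clusters in a hypothetical 3-D performance space.
data = np.vstack([rng.normal(0.2, 0.03, (50, 3)),
                  rng.normal(0.8, 0.03, (50, 3))])
W = train_som(data)
```

Mapping each architecture alternative to its BMU cell is the kind of dimensionality-reducing projection the bilayer P-space to C-space to D-space scheme relies on.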

  14. Sketching Some Postmodern Alternatives: Beyond Paradigms and Research Programs as Referents for Science Education.

    ERIC Educational Resources Information Center

    Geelan, David R.

    2000-01-01

    Suggests that Kuhn's and Lakatos' schemes for the philosophy of science have been pervasive metaphors for conceptual change approaches to the learning and teaching of science, and have been used both implicitly and explicitly to provide an organizing framework and justification matrix for those perspectives. Describes four alternative perspectives…

  15. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations which lead to improved algorithms that avoid the need for global level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.

  16. Income Gains for the Poor from Public Works Employment: Evidence from Two Indian Villages. Living Standards Measurement Study Working Paper No. 100.

    ERIC Educational Resources Information Center

    Datt, Gaurav; Ravallion, Martin

    "Workfare" schemes that offer poor participants unskilled jobs at low wages have become a popular alternative to cash or in-kind handouts. Yet little is known about a key determinant of the cost effectiveness of such schemes in reducing poverty: the behavioral responses through time allocation of participants and their families. These…

  17. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration for industrial carbon emissions standards relates largely to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in the current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to the responsible parties in a practical way through the measurement of complex production and supply chains, and reduces carbon emissions at their original sources. The method can be developed further under uncertain internal and external contexts, and is expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
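The embodied-intensity idea reduces to propagating direct intensities through the Leontief inverse of the input-output table. A toy three-sector sketch, with made-up coefficients:

```python
import numpy as np

# Hypothetical 3-sector economy.
# A[i, j]: input from sector i needed per unit of sector j's output.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
# Direct (power-related) emission intensity per unit output of each sector.
direct = np.array([0.5, 1.2, 0.3])

# Embodied intensity: direct intensity propagated through the full
# inter-industrial supply chain via the Leontief inverse (I - A)^-1,
# which sums the direct term plus all indirect rounds I + A + A^2 + ...
leontief = np.linalg.inv(np.eye(3) - A)
embodied = direct @ leontief
```

Because the Leontief series has only nonnegative terms, every sector's embodied intensity is at least its direct intensity, which is exactly the double-counting-free direct-plus-indirect accounting the benchmark method calls for.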

  18. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
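The shock-capturing advantage of upwinding is easiest to see on 1-D linear advection. Below is a minimal first-order upwind sketch (not the paper's streamwise scheme) that remains monotone where a central scheme would oscillate near the discontinuity; grid size and CFL number are assumed values.

```python
import numpy as np

nx, cfl, a = 200, 0.5, 1.0          # grid points, CFL number, wave speed
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse
mass0 = u.sum()

def upwind_step(u):
    """First-order upwind for u_t + a u_x = 0 (a > 0): difference against
    the upstream neighbour only, which keeps the scheme free of the
    oscillations a central difference would produce at the jumps."""
    return u - a * dt / dx * (u - np.roll(u, 1))

for _ in range(100):
    u = upwind_step(u)
```

The price of monotonicity is numerical diffusion, which smears the pulse edges; higher-order upwind schemes like the one in the paper reduce that smearing while retaining the shock-capturing behaviour.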

  19. Versatile and declarative dynamic programming using pair algebras.

    PubMed

    Steffen, Peter; Giegerich, Robert

    2005-09-12

    Dynamic programming is a widely used programming technique in bioinformatics. In sharp contrast to the simplicity of textbook examples, implementing a dynamic programming algorithm for a novel and non-trivial application is a tedious and error-prone task. The algebraic dynamic programming approach seeks to alleviate this situation by clearly separating the dynamic programming recurrences and scoring schemes. Based on this programming style, we introduce a generic product operation of scoring schemes. This leads to a remarkable variety of applications, allowing us to achieve optimizations under multiple objective functions, alternative solutions and backtracing, holistic search space analysis, ambiguity checking, and more, without additional programming effort. We demonstrate the method on several applications for RNA secondary structure prediction. The product operation as introduced here adds a significant amount of flexibility to dynamic programming. It provides a versatile testbed for the development of new algorithmic ideas, which can immediately be put to practice.
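The product operation can be illustrated on a textbook DP. Below, an edit-distance recurrence is evaluated under the product of a minimizing scoring algebra and a counting algebra, yielding the optimal cost together with the number of co-optimal edit scripts without changing the recurrences. This is a Python sketch of the idea, not the authors' Haskell framework.

```python
def edit_product(s, t):
    """Edit-distance DP under a product algebra: each cell carries
    (minimal cost, number of co-optimal edit scripts). Pairing the scoring
    algebra with the counting algebra requires no new recurrences."""
    def combine(*cands):
        # Product algebra: minimize on the first component, then sum the
        # counts of all candidates that achieve that minimum.
        best = min(c for c, _ in cands)
        return (best, sum(n for c, n in cands if c == best))

    m, n = len(s), len(t)
    D = [[(0, 1)] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = (i, 1)          # i deletions, one way
    for j in range(1, n + 1):
        D[0][j] = (j, 1)          # j insertions, one way
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i-1] == t[j-1] else 1
            D[i][j] = combine(
                (D[i-1][j-1][0] + sub, D[i-1][j-1][1]),  # match/substitute
                (D[i-1][j][0] + 1,     D[i-1][j][1]),    # delete
                (D[i][j-1][0] + 1,     D[i][j-1][1]),    # insert
            )
    return D[m][n]
```

Swapping the second component for, say, a backtrace algebra would enumerate the co-optimal scripts themselves, which is exactly the kind of flexibility the product operation buys.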

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pern, F.J.; Glick, S.H.; Czanderna, A.W.

    The stabilization effects of various superstrate materials against UV-induced EVA discoloration and the effect of photocurrent enhancement by white light-reflecting substrates are summarized. Based on the results, some alternative PV module encapsulation schemes are proposed for improved module performance, where the current or modified formulations of EVA encapsulants can still be used so that the typical processing tools and conditions need not be changed significantly. The schemes are designed in an attempt to eliminate or minimize the EVA yellow-browning and to increase the module power output. Four key experimental results from the studies of EVA discoloration and encapsulation are to employ: (1) UV-absorbing (filtering) glasses as superstrates to protect EVA from UV-induced discoloration, (2) gas-permeable polymer films as superstrates and/or substrates to prevent EVA yellowing by permitting photobleaching reactions, (3) modified EVA formulations, and (4) internal reflection of the light by white substrates. © 1996 American Institute of Physics.

  1. PRESAGE: PRivacy-preserving gEnetic testing via SoftwAre Guard Extension.

    PubMed

    Chen, Feng; Wang, Chenghong; Dai, Wenrui; Jiang, Xiaoqian; Mohammed, Noman; Al Aziz, Md Momin; Sadat, Md Nazmus; Sahinalp, Cenk; Lauter, Kristin; Wang, Shuang

    2017-07-26

    Advances in DNA sequencing technologies have prompted a wide range of genomic applications to improve healthcare and facilitate biomedical research. However, privacy and security concerns have emerged as a challenge for utilizing cloud computing to handle sensitive genomic data. We present one of the first implementations of a Software Guard Extension (SGX) based securely outsourced genetic testing framework, which leverages multiple cryptographic protocols and a minimal perfect hash scheme to enable efficient and secure data storage and computation outsourcing. We compared the performance of the proposed PRESAGE framework with a state-of-the-art homomorphic encryption scheme, as well as a plaintext implementation. The experimental results demonstrated a significant performance improvement over the homomorphic encryption methods and a small computational overhead in comparison to the plaintext implementation. The proposed PRESAGE framework provides an alternative solution for secure and efficient genomic data outsourcing in an untrusted cloud by using a hybrid framework that combines secure hardware and multiple crypto protocols.

  2. Data Management Systems (DMS): Complex data types study. Volume 1: Appendices A-B. Volume 2: Appendices C1-C5. Volume 3: Appendices D1-D3 and E

    NASA Technical Reports Server (NTRS)

    Leibfried, T. F., Jr.; Davari, Sadegh; Natarajan, Swami; Zhao, Wei

    1992-01-01

    Two categories were chosen for study. The first was the use of a preprocessor on the Ada code of Application Programs that interface with the Run-Time Object Data Base Standard Services (RODB STSV); the intent was to catch and correct any mis-registration errors by the program coder between the user-declared Objects, their types, their addresses, and the corresponding RODB definitions. The second covered RODB STSV performance issues and the identification of problems with the planned methods for accessing Primitive Object Attributes; this included the study of an alternate storage scheme to the 'store objects by attribute' scheme in the current design of the RODB. The study resulted in essentially three separate documents: an interpretation of the system requirements, an assessment of the preliminary design, and a detailing of the components of a detailed design.

  3. Comment on "Scrutinizing the carbon cycle and CO2 residence time in the atmosphere" by H. Harde

    NASA Astrophysics Data System (ADS)

    Köhler, Peter; Hauck, Judith; Völker, Christoph; Wolf-Gladrow, Dieter A.; Butzin, Martin; Halpern, Joshua B.; Rice, Ken; Zeebe, Richard E.

    2018-05-01

    Harde (2017) proposes an alternative accounting scheme for the modern carbon cycle and concludes that only 4.3% of today's atmospheric CO2 is a result of anthropogenic emissions. As we will show, this alternative scheme is too simple, is based on invalid assumptions, and does not address many of the key processes involved in the global carbon cycle that are important on the timescale of interest. Harde (2017) therefore reaches an incorrect conclusion about the role of anthropogenic CO2 emissions. Harde (2017) tries to explain changes in atmospheric CO2 concentration with a single equation, while the simplest model of the carbon cycle must at minimum contain equations of at least two reservoirs (the atmosphere and the surface ocean), which are solved simultaneously. A single equation is fundamentally at odds with basic theory and observations. In the following we will (i) clarify the difference between CO2 atmospheric residence time and adjustment time, (ii) present recently published information about anthropogenic carbon, (iii) present details about the processes that are missing in Harde (2017), (iv) briefly discuss shortcomings in Harde's generalization to paleo timescales, and (v) comment on deficiencies in some of the literature cited in Harde (2017).
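The two-reservoir point can be made concrete with a minimal atmosphere/surface-ocean box model; the exchange coefficients below are illustrative, not fitted values. A pulse of emissions decays toward a new equilibrium shared between the boxes, behaviour a single-equation model cannot represent.

```python
# Minimal two-box carbon model (atmosphere + surface ocean), in GtC above
# preindustrial. Exchange coefficients are illustrative, not fitted.
k_ao = 0.1   # fraction of atmospheric excess taken up by the ocean per year
k_oa = 0.05  # fraction of oceanic excess outgassed back per year

def run(emission_pulse=100.0, years=200, dt=1.0):
    """Explicit-Euler integration of the coupled two-reservoir system."""
    atm, ocean = emission_pulse, 0.0
    history = []
    for _ in range(int(years / dt)):
        flux = (k_ao * atm - k_oa * ocean) * dt   # net air-to-sea flux
        atm -= flux
        ocean += flux
        history.append(atm)
    return atm, ocean, history

atm, ocean, history = run()
```

The atmospheric excess relaxes not to zero but to the shared equilibrium k_oa/(k_ao + k_oa) of the initial pulse, and the relaxation rate (the adjustment time) differs from the gross-exchange residence time, which is exactly the distinction point (i) above draws.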

  4. Mapping edge-based traffic measurements onto the internal links in MPLS network

    NASA Astrophysics Data System (ADS)

    Zhao, Guofeng; Tang, Hong; Zhang, Yi

    2004-09-01

    Applying multi-protocol label switching (MPLS) techniques to an IP-based backbone for traffic engineering goals has proven advantageous. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Although such data can be collected for each link, for example with the traditional SNMP scheme, that approach may impose a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping with edge-based measurements in an MPLS network: the volume of traffic on each internal link of the domain is estimated from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network, and propose a method that infers a path from the ingress to the egress node using the label distribution protocol without collecting routing data from core routers. Based on flow theory and queuing theory, we prove that our approach is effective and present the algorithm for traffic mapping. We also show performance simulation results that indicate the potential of our approach.
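
    The mapping step can be sketched simply: once a path has been inferred for each label-switched path (LSP), the rate measured for it at the ingress is accumulated onto every internal link the path traverses. The LSP names, link labels and rates below are hypothetical.

```python
# Sketch: map per-LSP ingress rate measurements onto internal link loads.
from collections import defaultdict

def map_paths_to_links(path_rates, paths):
    """path_rates: {lsp_id: rate}; paths: {lsp_id: [link, ...]}."""
    link_load = defaultdict(float)
    for lsp, rate in path_rates.items():
        for link in paths[lsp]:          # every link on the inferred path
            link_load[link] += rate      # carries this LSP's traffic
    return dict(link_load)

rates = {"lsp1": 3.0, "lsp2": 2.0}                       # measured at ingress
paths = {"lsp1": ["A-B", "B-C"], "lsp2": ["A-B", "B-D"]} # inferred via LDP
# link "A-B" carries both LSPs, so its estimated load is 5.0
```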

  5. Computing Evans functions numerically via boundary-value problems

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin

    2018-03-01

    The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.

  6. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  7. Implementation of orthogonal frequency division multiplexing (OFDM) and advanced signal processing for elastic optical networking in accordance with networking and transmission constraints

    NASA Astrophysics Data System (ADS)

    Johnson, Stanley

    An increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications, and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. 
Finally, I experimentally demonstrate optical re-timing of a 10.7 Gb/s data stream utilizing the property of bound soliton pairs (or "soliton molecules") to relax to an equilibrium temporal separation after propagation through a nonlinear dispersion alternating fiber span. Pulses offset up to 16 ps from bit center are successfully re-timed. The optical re-timing scheme studied here is a good example of signal processing in the optical domain and such a technique can overcome the bandwidth bottleneck present in DSP. An enhanced version of this re-timing scheme is analyzed using numerical simulations.
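
    The OFDM format at the heart of these implementations can be illustrated with a minimal baseband round trip: one QPSK symbol per subcarrier, IDFT modulation, a cyclic prefix, and DFT demodulation over an ideal channel. This sketch models none of the dissertation's hardware specifics (FPGA, direct vs. coherent detection, pilot tones).

```python
# Minimal baseband OFDM round trip: QPSK symbols -> IDFT -> cyclic prefix
# -> ideal channel -> strip prefix -> DFT recovers the symbols.
import cmath

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * k * m / n) for k in range(n))
           for m in range(n)]
    return [v / n for v in out] if inverse else out

symbols = [1+1j, -1+1j, -1-1j, 1-1j]      # one QPSK symbol per subcarrier
time_domain = dft(symbols, inverse=True)  # OFDM modulation (IDFT)
tx = time_domain[-1:] + time_domain       # prepend a 1-sample cyclic prefix
rx = tx[1:]                               # ideal channel, strip the prefix
recovered = dft(rx)                       # OFDM demodulation (DFT)
```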

  8. A cloud model based multi-attribute decision making approach for selection and evaluation of groundwater management schemes

    NASA Astrophysics Data System (ADS)

    Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia

    2017-12-01

    Because uncertainty (i.e., fuzziness, stochasticity and imprecision) exists simultaneously during the process of groundwater remediation, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo simulation for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities and can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by the weight cloud module. A case study on remedial alternative selection for a site contaminated by a 1,1,1-trichloroethylene leak in Shanghai, China, is conducted to illustrate the efficiency and applicability of the developed approach. An attribute system consisting of ten attributes, including daily total pumping rate, total cost and cloud model based health risk, was used to evaluate each alternative under uncertainty. Results indicated that alternative A14 was the most preferred for the 5-year remediation period, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year.
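
    The backward cloud generator mentioned above estimates a cloud's three numerical characteristics (expectation Ex, entropy En, hyper-entropy He) from samples. The sketch below uses the standard moment-based estimator; the input data are hypothetical, not the case-study concentrations.

```python
# Backward cloud generator: estimate (Ex, En, He) from numeric samples.
import math

def backward_cloud(samples):
    n = len(samples)
    ex = sum(samples) / n                               # expectation Ex
    mad = sum(abs(x - ex) for x in samples) / n         # mean absolute deviation
    en = math.sqrt(math.pi / 2.0) * mad                 # entropy En
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)  # sample variance
    he = math.sqrt(max(s2 - en * en, 0.0))              # hyper-entropy He (clamped)
    return ex, en, he
```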

  9. Enhancing LoRaWAN Security through a Lightweight and Authenticated Key Management Approach.

    PubMed

    Sanchez-Iborra, Ramon; Sánchez-Gómez, Jesús; Pérez, Salvador; Fernández, Pedro J; Santa, José; Hernández-Ramos, José L; Skarmeta, Antonio F

    2018-06-05

    Luckily, new communication technologies and protocols are nowadays designed considering security issues. A clear example of this can be found in the Internet of Things (IoT) field, a quite recent area where communication technologies such as ZigBee or IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) already include security features to guarantee authentication, confidentiality and integrity. More recent technologies are Low-Power Wide-Area Networks (LP-WAN), which also consider security, but present initial approaches that can be further improved. An example of this can be found in Long Range (LoRa) and its layer-two supporter LoRa Wide Area Network (LoRaWAN), which include a security scheme based on pre-shared cryptographic material lacking flexibility when a key update is necessary. Because of this, in this work, we evaluate the security vulnerabilities of LoRaWAN in the area of key management and propose different alternative schemes. Concretely, the application of an approach based on the recently specified Ephemeral Diffie-Hellman Over COSE (EDHOC) is found as a convenient solution, given its flexibility in the update of session keys, its low computational cost and the limited message exchanges needed. A comparative conceptual analysis considering the overhead of different security schemes for LoRaWAN is carried out in order to evaluate their benefits in the challenging area of LP-WAN.

  10. Visceral Leishmaniasis on the Indian Subcontinent: Modelling the Dynamic Relationship between Vector Control Schemes and Vector Life Cycles.

    PubMed

    Poché, David M; Grant, William E; Wang, Hsiao-Hsuan

    2016-08-01

    Visceral leishmaniasis (VL) is a disease caused by two known vector-borne parasite species (Leishmania donovani, L. infantum), transmitted to man by phlebotomine sand flies (species: Phlebotomus and Lutzomyia), resulting in ≈50,000 human fatalities annually, ≈67% occurring on the Indian subcontinent. Indoor residual spraying is the current method of sand fly control in India, but alternative means of vector control, such as the treatment of livestock with systemic insecticide-based drugs, are being evaluated. We describe an individual-based, stochastic, life-stage-structured model that represents a sand fly vector population within a village in India and simulates the effects of vector control via fipronil-based drugs orally administered to cattle, which target both blood-feeding adults and larvae that feed on host feces. Simulation results indicated efficacy of fipronil-based control schemes in reducing sand fly abundance depended on timing of drug applications relative to seasonality of the sand fly life cycle. Taking into account cost-effectiveness and logistical feasibility, two of the most efficacious treatment schemes reduced population peaks occurring from April through August by ≈90% (applications 3 times per year at 2-month intervals initiated in March) and >95% (applications 6 times per year at 2-month intervals initiated in January) relative to no control, with the cumulative number of sand fly days occurring April-August reduced by ≈83% and ≈97%, respectively, and more specifically during the summer months of peak human exposure (June-August) by ≈85% and ≈97%, respectively. Our model should prove useful in a priori evaluation of the efficacy of fipronil-based drugs in controlling leishmaniasis on the Indian subcontinent and beyond.

  11. Evaluation of appropriate technologies for grey water treatments and reuses.

    PubMed

    Li, Fangyue; Wichmann, Knut; Otterpohl, Ralf

    2009-01-01

    As water is becoming a rare resource, the onsite reuse and recycling of grey water is practiced in many countries as a sustainable solution to reduce overall urban water demand. However, the lack of appropriate water quality standards or guidelines has hampered appropriate grey water reuse. Based on a literature review, a non-potable urban grey water treatment and reuse scheme is proposed, and the treatment alternatives for grey water reuse are evaluated according to the grey water characteristics, the proposed standards and economic feasibility.

  12. Progress in understanding heavy-ion stopping

    NASA Astrophysics Data System (ADS)

    Sigmund, P.; Schinner, A.

    2016-09-01

    We report some highlights of our work with heavy-ion stopping in the energy range where Bethe stopping theory breaks down. Main tools are our binary stopping theory (PASS code), the reciprocity principle, and Paul's data base. Comparisons are made between PASS and three alternative theoretical schemes (CasP, HISTOP and SLPA). In addition to equilibrium stopping we discuss frozen-charge stopping, deviations from linear velocity dependence below the Bragg peak, application of the reciprocity principle in low-velocity stopping, modeling of equilibrium charges, and the significance of the so-called effective charge.

  13. Electrical motor/generator drive apparatus and method

    DOEpatents

    Su, Gui Jia

    2013-02-12

    The present disclosure includes electrical motor/generator drive systems and methods that significantly reduce inverter direct-current (DC) bus ripple currents and thus the volume and cost of a capacitor. The drive methodology is based on a segmented drive system that does not add switches or passive components but involves reconfiguring inverter switches and motor stator winding connections in a way that allows the formation of multiple, independent drive units and the use of simple alternated switching and optimized Pulse Width Modulation (PWM) schemes to eliminate or significantly reduce the capacitor ripple current.

  14. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    The two popular packet-combining based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC, but suffers from a higher packet error rate. Because the state of a wireless channel changes randomly over time, individual application of the SR ARQ, PC or APC scheme cannot deliver the desired throughput; better throughput can be achieved if the appropriate transmission scheme is chosen for the current channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission using the PC, APC or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC and APC schemes individually.
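
    The combining idea underlying APC can be illustrated with bitwise majority voting over three received copies of a packet. This is a toy sketch of the core mechanism only; it omits the protocol details (ARQ signalling, selection of least-reliable bits for inversion), and the packets shown are hypothetical.

```python
# Bitwise majority voting over three erroneous copies of a packet.

def majority_combine(c1, c2, c3):
    # each bit position takes the value seen in at least two copies
    return [1 if (a + b + c) >= 2 else 0 for a, b, c in zip(c1, c2, c3)]

packet = [1, 0, 1, 1, 0, 0, 1, 0]   # original transmitted packet
r1 = [0, 0, 1, 1, 0, 0, 1, 0]       # copy with bit 0 flipped
r2 = [1, 0, 1, 0, 0, 0, 1, 0]       # copy with bit 3 flipped
r3 = [1, 0, 1, 1, 0, 1, 1, 0]       # copy with bit 5 flipped
# the errors hit different positions, so majority voting recovers the packet
```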

  15. The study of PDF turbulence models in combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    In combustion computations, it is known that predictions of chemical reaction rates are poor if conventional turbulence models are used. The probability density function (pdf) method seems to be the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus is the only viable approach for more accurate turbulent combustion calculations. The fact that the pdf equation has a very large dimensionality renders finite difference schemes extremely demanding on computer memory and thus impractical. A logical alternative is the Monte Carlo scheme. Since CFD has a certain maturity as well as acceptance, the use of a combined CFD and Monte Carlo scheme is more beneficial. Therefore, a scheme is chosen that uses a conventional CFD flow solver to calculate flow field properties such as velocity and pressure, while the chemical reaction part is solved using a Monte Carlo scheme. The discharge of a heated turbulent plane jet into quiescent air was studied. Experimental data for this problem show that when the temperature difference between the jet and the surrounding air is small, the buoyancy effect can be neglected and the temperature can be treated as a passive scalar. The fact that jet flows have a self-similar solution lends convenience to the modeling study. Furthermore, the existence of experimental data for turbulent shear stress and temperature variance makes the case ideal for testing pdf models wherein these values can be directly evaluated.

  16. Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy

    NASA Astrophysics Data System (ADS)

    Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente

    2017-02-01

    We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast convergence algorithm for image processing. Altogether, MISHELF (from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel domain diffraction patterns in a single camera snapshot arising from illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range), phase-retrieved (twin image elimination) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus becomes an alternative instrument improving on some capabilities of existing lensless microscopes.

  17. Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy

    PubMed Central

    Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente

    2017-01-01

    We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast convergence algorithm for image processing. Altogether, MISHELF (from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel domain diffraction patterns in a single camera snapshot arising from illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range), phase-retrieved (twin image elimination) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus becomes an alternative instrument improving on some capabilities of existing lensless microscopes. PMID:28233829

  18. The Emergent Universe scheme and tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labraña, Pedro

    We present an alternative scheme for an Emergent Universe scenario, developed previously in Phys. Rev. D 86, 083524 (2012), where the universe is initially in a static state supported by a scalar field located in a false vacuum. The universe begins to evolve when, by quantum tunneling, the scalar field decays into a state of true vacuum. The Emergent Universe models are interesting since they provide specific examples of non-singular inflationary universes.

  19. Isolating Flow-field Discontinuities while Preserving Monotonicity and High-order Accuracy on Cartesian Meshes

    DTIC Science & Technology

    2017-01-09

    2017 Distribution A – Approved for public release; Distribution Unlimited. PA Clearance 17030
    Introduction
    • Filtering schemes offer a less...dissipative alternative to the standard artificial dissipation operators when applied to high-order spatial/temporal schemes
    • Limiting Fact: Filters impart...systems require a preconditioned dual-time framework to be solved efficiently
    • Limiting Fact: Filtering cannot be applied only at the physical-time

  20. Range Sidelobe Suppression Using Complementary Sets in Distributed Multistatic Radar Networks

    PubMed Central

    Wang, Xuezhi; Song, Yongping; Huang, Xiaotao; Moran, Bill

    2017-01-01

    We propose an alternative waveform scheme built on mutually-orthogonal complementary sets for a distributed multistatic radar. Our analysis and simulation show a reduced frequency band requirement for signal separation between antennas with centralized signal processing using the same carrier frequency. While the scheme can tolerate fluctuations of carrier frequencies and phases, range sidelobes arise when carrier frequencies between antennas are significantly different. PMID:29295566
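
    The sidelobe-suppression property the scheme builds on can be checked directly: the aperiodic autocorrelations of the two sequences of a complementary (Golay) pair sum to zero at every nonzero lag. A toy length-4 pair illustrates this; real radar waveforms use much longer sets.

```python
# Complementary (Golay) pair: autocorrelation sidelobes cancel when summed.

def autocorr(seq):
    # aperiodic autocorrelation for non-negative lags
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag)) for lag in range(n)]

a = [1, 1, 1, -1]   # a standard length-4 Golay complementary pair
b = [1, 1, -1, 1]
summed = [x + y for x, y in zip(autocorr(a), autocorr(b))]
# summed == [8, 0, 0, 0]: full mainlobe, zero range sidelobes
```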

  1. Long-distance quantum communication over noisy networks without long-time quantum memory

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Łodyga, Justyna; Pankowski, Łukasz; Przysiężna, Anna

    2014-12-01

    The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008), 10.1103/PhysRevA.78.062324] the authors put forward an important isomorphism between storing quantum information in a dimension D and transmission of quantum information in a D +1 -dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state is exposed to an error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state, from a specific node into a whole two-dimensional network, through some virtual EPR pair existing within the rest of network qubits. We present an analytic lower bound on fidelity of the encoding and decoding procedure, using as our main tool a modified metric on space-time lattice, deviating from a taxicab metric at the first and the last time slices.

  2. Income-based equity weights in healthcare planning and policy.

    PubMed

    Herlitz, Anders

    2017-08-01

    Recent research indicates that there is a gap in life expectancy between the rich and the poor. This raises the question: should we on egalitarian grounds use income-based equity weights when we assess benefits of alternative benevolent interventions, so that health benefits to the poor count for more? This article provides three egalitarian arguments for using income-based equity weights under certain circumstances. If income inequality correlates with inequality in health, we have reason to use income-based equity weights on the ground that health inequality is bad. If income inequality correlates with inequality in opportunity for health, we have reason to use such weights on the ground that inequality in opportunity for health is bad. If income inequality correlates with inequality in well-being, income-based equity weights should be used to mitigate inequality in well-being. Three different ways in which to construe income-based equity weights are introduced and discussed. They can be based on relative income inequality, on income rankings and on capped absolute income. The article does not defend any of these types of weighting schemes, but argues that in order to settle which of these types of weighting scheme to choose, more empirical research is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
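
    The three weighting constructions discussed above can be sketched as simple functions. The functional forms and the income cap below are illustrative assumptions of this sketch, not proposals from the article.

```python
# Three illustrative income-based equity weight constructions.

def relative_weight(income, mean_income):
    # based on relative income inequality: poorer than average -> weight > 1
    return mean_income / income

def rank_weight(income, incomes):
    # based on income rankings: weights fall linearly from 2 (poorest) to 1 (richest)
    rank = sorted(incomes).index(income)
    return 2.0 - rank / (len(incomes) - 1)

def capped_weight(income, cap=50_000):
    # based on capped absolute income: no extra weight above the cap
    return cap / min(income, cap)

incomes = [20_000, 40_000, 60_000]
mean = sum(incomes) / len(incomes)   # 40_000
```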

  3. Intercomparison of spectral irradiance measurements and provision of alternative radiation scheme for CCMs of middle atmosphere

    NASA Astrophysics Data System (ADS)

    Pagaran, Joseph; Weber, Mark; Burrows, John P.

    The Sun's radiative output (total solar irradiance, or TSI) determines the thermal structure of the Earth's atmosphere. Its variability is a strong function of wavelength and drives the photochemistry and general circulation. Contributions to TSI variability from UV wavelengths below 400 nm, e.g. over the 27-day solar rotation, are estimated to be in the 40-60% range, based on three decades of UV and about a decade of vis-IR observations. Significant progress in the UV/vis-IR regions has been achieved with daily monitoring from SCIAMACHY aboard Envisat (ESA) since 2002 and by SIM aboard SORCE (NASA) about a year later. In this contribution, we intercompare SSI measurements from SCIAMACHY, SIM and the RGB filters of SPM/VIRGO aboard SoHO: (a) same-day spectra and (b) a few 27-day time series of spectral measurements, both in irradiance and in irradiance integrated over selected wavelength intervals. Finally, we show how SSI measurements from GOME and SOLSTICE, in addition to SCIAMACHY and SIM, can be modeled together with the solar proxies F10.7 cm, Mg II and the photometric sunspot index (PSI) to derive daily SSI variability over the period 1947-2008. The derived variabilities are currently being used as solar input to Bremen's 3D-CTM and are recommended as an extended alternative to Berlin's FUBRaD radiation scheme. This proxy-based radiation scheme is compared with the SATIRE, NRLSSI (Lean et al.), SUSIM, SSAI (DeLand et al.) and SIP (Solar2000) models. The use of realistic spectrally resolved solar input to CCMs serves to better understand the effects of solar variability on chemistry and temperature in the middle atmosphere over several decades.

  4. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme; it is a simple constant-gain control scheme better suited to parallel mechanisms, and it can easily be modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

  5. CEPC booster design study

    DOE PAGES

    Bian, Tianjian; Gao, Jie; Zhang, Chuang; ...

    2017-12-10

    In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV while the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement orbit correction to compensate for the earth's field.

  6. CEPC booster design study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bian, Tianjian; Gao, Jie; Zhang, Chuang

    In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV while the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement orbit correction to compensate for the earth's field.

  7. Changes in animal performance and profitability of Holstein dairy operations after introduction of crossbreeding with Montbéliarde, Normande, and Scandinavian Red.

    PubMed

    Dezetter, C; Bareille, N; Billon, D; Côrtes, C; Lechartier, C; Seegers, H

    2017-10-01

    An individual-based mechanistic, stochastic, and dynamic simulation model was developed to assess economic effects resulting from changes in performance for milk yield and solid contents, reproduction, health, and replacement, induced by the introduction of crossbreeding in Holstein dairy operations. Three crossbreeding schemes, Holstein × Montbéliarde, Holstein × Montbéliarde × Normande, and Holstein × Montbéliarde × Scandinavian Red, were implemented in Holstein dairy operations and compared with Holstein pure breeding. Sires were selected based on their estimated breeding value for milk. Two initial operations were simulated according to the prevalence (average or high) of reproductive and health disorders in the lactating herd. Evolution of operations was simulated during 15 yr under 2 alternative managerial goals (constant number of cows or constant volume of milk sold). After 15 yr, breed percentages reached equilibrium for the 2-breed but not for the 3-breed schemes. After 5 yr of simulation, all 3 crossbreeding schemes reduced average milk yield per cow-year compared with the pure Holstein scheme. Changes in other animal performance (milk solid contents, reproduction, udder health, and longevity) were always in favor of crossbreeding schemes. Under an objective of constant number of cows, margin over variable costs in average discounted value over the 15 yr of simulation was slightly increased by crossbreeding schemes, with an average prevalence of disorders up to €32/cow-year. In operations with a high prevalence of disorders, crossbreeding schemes increased the margin over variable costs up to €91/cow-year. Under an objective of constant volume of milk sold, crossbreeding schemes improved margin over variable costs up to €10/1,000L (corresponding to around €96/cow-year) for average prevalence of disorders, and up to €13/1,000L (corresponding to around €117/cow-year) for high prevalence of disorders. 
Under an objective of constant number of cows, an unfavorable pricing context (milk price vs. concentrates price) slightly increased the positive effects of crossbreeding on margin over variable costs. Under an objective of constant volume of milk, only very limited changes in the differences of margins were found between the breeding schemes. Our results, conditional on the parameterization values used here, suggest that dairy crossbreeding should be considered a relevant option for Holstein dairy operations with a production level of up to 9,000 kg/cow-year in France, and possibly in other countries. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Hybrid estimation of complex systems.

    PubMed

    Hofbaur, Michael W; Williams, Brian C

    2004-10-01

    Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
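The bank-of-filters idea behind MM estimation can be sketched in a few lines: one Kalman filter per discrete mode, with mode probabilities updated from each filter's measurement likelihood. The scalar dynamics, noise levels, and two-mode setup below are illustrative assumptions, not the authors' hybrid model.

```python
import math
import random

random.seed(0)

# Toy multiple-model (MM) estimator: one scalar Kalman filter per mode.
# Mode 0: nominal dynamics (a = 0.9); mode 1: "fault" dynamics (a = 0.5).
MODES = [0.9, 0.5]
Q, R = 0.01, 0.04            # process / measurement noise variances

def kf_step(a, x, P, y):
    """One predict/update step of a scalar Kalman filter.
    Returns the new state, covariance, and the measurement likelihood."""
    xp, Pp = a * x, a * a * P + Q              # predict
    S = Pp + R                                 # innovation variance
    K = Pp / S                                 # Kalman gain
    innov = y - xp
    like = math.exp(-0.5 * innov * innov / S) / math.sqrt(2 * math.pi * S)
    return xp + K * innov, (1 - K) * Pp, like

# Simulate the true system in mode 1 and track the mode beliefs.
true_a, x_true = 0.5, 1.0
states = [(1.0, 1.0), (1.0, 1.0)]              # (x, P) for each filter
mu = [0.5, 0.5]                                # mode probabilities

for _ in range(60):
    x_true = true_a * x_true + random.gauss(0.0, math.sqrt(Q))
    y = x_true + random.gauss(0.0, math.sqrt(R))
    likes = []
    for i, a in enumerate(MODES):
        x, P, like = kf_step(a, states[i][0], states[i][1], y)
        states[i] = (x, P)
        likes.append(like)
    w = [m * l for m, l in zip(mu, likes)]     # Bayes update of mode beliefs
    total = sum(w) or 1e-300
    mu = [v / total for v in w]

best_mode = mu.index(max(mu))
```

With one filter per mode this costs O(number of modes) per step, which is exactly what becomes infeasible for the systems the abstract describes; the paper's search-based scheme avoids enumerating the full bank.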

  9. Communication: Density functional theory embedding with the orthogonality constrained basis set expansion procedure

    NASA Astrophysics Data System (ADS)

    Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon

    2017-06-01

    Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.

  10. Interpretation for scales of measurement linking with abstract algebra

    PubMed Central

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: “Nominal”, “Ordinal”, “Interval” and “Ratio”. This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; ‘Abelian modulo additive group’ for “Ordinal scale” accompanied with ‘zero’, ‘Abelian additive group’ for “Interval scale”, and ‘field’ for “Ratio scale”. Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected. PMID:24987515

  11. Interpretation for scales of measurement linking with abstract algebra.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; 'Abelian modulo additive group' for "Ordinal scale" accompanied with 'zero', 'Abelian additive group' for "Interval scale", and 'field' for "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected.

  12. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  13. A hydrological emulator for global applications - HE v1.0.0

    NASA Astrophysics Data System (ADS)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong

    2018-03-01

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. 
A case study of uncertainty analysis for the 16 world basins with the largest annual streamflow is conducted using 100,000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use; the distributed scheme is also an efficient alternative if spatial heterogeneity is of greater interest.
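For reference, a single monthly step of the classic Thomas "abcd" water-balance model that the emulator builds on can be sketched as below. This follows the common textbook formulation; the parameter values and forcing are invented for illustration, and the paper's revised lumped scheme may differ in detail.

```python
import math

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """One monthly step of the Thomas (1981) 'abcd' water-balance model.
    P, PET in mm; S = soil-moisture storage, G = groundwater storage."""
    W = P + S_prev                                    # available water
    # ET opportunity Y: smaller root of a*Y^2 - (W + b)*Y + W*b = 0
    t = (W + b) / (2.0 * a)
    Y = t - math.sqrt(max(t * t - W * b / a, 0.0))
    S = Y * math.exp(-PET / b)                        # end-of-month soil storage
    avail = W - Y                                     # water in excess of Y
    G = (G_prev + c * avail) / (1.0 + d)              # recharge fraction c
    runoff = (1.0 - c) * avail + d * G                # direct runoff + baseflow
    return runoff, S, G

# Illustrative parameters and one year of synthetic monthly forcing (mm).
a, b, c, d = 0.98, 250.0, 0.6, 0.1
S, G = 100.0, 50.0
flows = []
for month in range(12):
    P = 80.0 + 40.0 * math.sin(month / 12.0 * 2.0 * math.pi)
    PET = 60.0 + 30.0 * math.cos(month / 12.0 * 2.0 * math.pi)
    q, S, G = abcd_step(P, PET, S, G, a, b, c, d)
    flows.append(q)
```

Each step is a handful of scalar operations, which is why a lumped abcd emulator can be orders of magnitude cheaper than a full GHM such as VIC.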

  14. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, indicates that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  15. t4 Workshop Report

    PubMed Central

    Silbergeld, Ellen K.; Contreras, Elizabeth Q.; Hartung, Thomas; Hirsch, Cordula; Hogberg, Helena; Jachak, Ashish C.; Jordan, William; Landsiedel, Robert; Morris, Jeffery; Patri, Anil; Pounds, Joel G.; de Vizcaya Ruiz, Andrea; Shvedova, Anna; Tanguay, Robert; Tatarazako, Norihasa; van Vliet, Erwin; Walker, Nigel J.; Wiesner, Mark; Wilcox, Neil; Zurlo, Joanne

    2014-01-01

    Summary In October 2010, a group of experts met as part of the transatlantic think tank for toxicology (t4) to exchange ideas about the current status and future of safety testing of nanomaterials. At present, there is no widely accepted path forward to assure appropriate and effective hazard identification for engineered nanomaterials. The group discussed needs for characterization of nanomaterials and identified testing protocols that incorporate the use of innovative alternative whole-organism models such as zebrafish or C. elegans, as well as in vitro or alternative methods to examine specific functional pathways and modes of action. The group proposed elements of a potential testing scheme for nanomaterials that works towards an integrated testing strategy, incorporating the goals of the NRC report Toxicity Testing in the 21st Century: A Vision and a Strategy by focusing on pathways of toxic response, and utilizing an evidence-based strategy for developing the knowledge base for safety assessment. Finally, the group recommended that a reliable, open, curated database be developed that interfaces with existing databases to enable sharing of information. PMID:21993959

  16. Comment on "An improved gray Lattice Boltzmann model for simulating fluid flow in multi-scale porous media": Intrinsic links between LBE Brinkman schemes

    NASA Astrophysics Data System (ADS)

    Ginzburg, Irina

    2016-02-01

    In this Comment on the recent work (Zhu and Ma, 2013) [11] by Zhu and Ma (ZM) we first show that all three local gray Lattice Boltzmann (GLB) schemes in the form (Zhu and Ma, 2013) [11]: GS (Chen and Zhu, 2008; Gao and Sharma, 1994) [1,4], WBS (Walsh et al., 2009) [12] and ZM, fail to get constant Darcy's velocity in series of porous blocks. This inconsistency is because of their incorrect definition of the macroscopic velocity in the presence of the heterogeneous momentum exchange, while the original WBS model (Walsh et al., 2009) [12] does this properly. We improve the GS and ZM schemes for this and other related deficiencies. Second, we show that the "discontinuous velocity" they recover on the stratified interfaces with their WBS scheme is inherent, in different degrees, to all LBE Brinkman schemes, including the ZM scheme. None of them guarantees the stress and the velocity continuity by their implicit interface conditions, even in the frame of the two-relaxation-times (TRT) collision operator where these two properties are assured in stratified Stokes flow, Ginzburg (2007) [5]. Third, the GLB schemes are presented in work (Zhu and Ma, 2013) [11] as alternatives to the direct, Brinkman-force-based (BF) schemes (Freed, 1998; Nie and Martys, 2007) [3,8]. Yet, we show that the BF-TRT scheme (Ginzburg, 2008) [6] obtains the solutions of any of the improved GLB schemes for a specific, viscosity-dependent choice of its one or two local relaxation rates. This provides the principal difference between the GLB and BF: while the BF may respect the linearity of the Stokes-Brinkman equation rigorously, the GLB-TRT cannot, unless it reduces to the BF via the inverse transform of the relaxation rates. Furthermore, we show that, in a limited parameter space, "gray" schemes may emulate one another. From the practical point of view, permeability values obtained with the GLB are viscosity-dependent, unlike with the BF. 
Finally, the GLB shares with the BF a so-called anisotropy (Ginzburg, 2008; Nie and Martys, 2007) [6,8], that is, a flow-direction dependency in their effective viscosity corrections, related to the discretized spatial variation of the resistance forcing.

  17. A Novel Optimal Joint Resource Allocation Method in Cooperative Multicarrier Networks: Theory and Practice

    PubMed Central

    Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng

    2016-01-01

    With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation within capacity constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs) equipped with single antennae serve multiple single-antennae users via multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel robust and efficient centralized algorithm based on alternating optimization strategy and perfect mapping is proposed. Simulations show that our novel method can improve the system capacity significantly under the constraint of the backhaul resource compared with the blind alternatives. PMID:27077865
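The alternating optimization strategy the proposed algorithm relies on can be illustrated on a toy smooth problem: fix one block of variables, minimize exactly over the other, and repeat until the coupled optimality conditions are satisfied. The objective below is a made-up biconvex function, not the paper's joint user-pairing and backhaul-sharing problem.

```python
# Toy illustration of alternating optimization on an arbitrary smooth
# biconvex objective (invented for this sketch).
def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

x, y = 0.0, 0.0
for _ in range(100):
    x = 1.0 - 0.25 * y    # argmin over x: solve df/dx = 2(x - 1) + 0.5y = 0
    y = -2.0 - 0.25 * x   # argmin over y: solve df/dy = 2(y + 2) + 0.5x = 0
# The coupled optimality conditions have the unique fixed point (1.6, -2.4),
# and the alternating updates contract toward it.
```

Each sub-problem here has a closed-form solution; in the paper's setting the analogous sub-problems (user sorting, subcarrier mapping, backhaul sharing) are solved in turn in the same spirit.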

  18. A gas kinetic scheme for hybrid simulation of partially rarefied flows

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Steijl, R.; Barakos, G.

    2017-06-01

    Approaches to predict flow fields that display rarefaction effects incur a cost in computational time and memory considerably higher than methods commonly employed for continuum flows. For this reason, to simulate flow fields where continuum and rarefied regimes coexist, hybrid techniques have been introduced. In the present work, analytically defined gas-kinetic schemes based on the Shakhov and Rykov models for monoatomic and diatomic gas flows, respectively, are proposed and evaluated with the aim to be used in the context of hybrid simulations. This should reduce the region where more expensive methods are needed by extending the validity of the continuum formulation. Moreover, since for high-speed rarefied gas flows it is necessary to take into account the nonequilibrium among the internal degrees of freedom, the extension of the approach to employ diatomic gas models including rotational relaxation process is a mandatory first step towards realistic simulations. Compared to previous works of Xu and coworkers, the presented scheme is defined directly on the basis of kinetic models which involve a Prandtl number correction. Moreover, the methods are defined fully analytically instead of making use of Taylor expansion for the evaluation of the required derivatives. The scheme has been tested for various test cases and Mach numbers proving to produce reliable predictions in agreement with other approaches for near-continuum flows. Finally, the performance of the scheme, in terms of memory and computational time, compared to discrete velocity methods makes it a compelling alternative in place of more complex methods for hybrid simulations of weakly rarefied flows.

  19. [PERSPECTIVES OF DEVELOPMENT OF LIVE RECOMBINANT ANTHRAX VACCINES BASED ON OPPORTUNISTIC AND APATHOGENIC MICROORGANISMS].

    PubMed

    Popova, P Yu; Mikshis, N I

    2016-01-01

    Live genetically engineered anthrax vaccines built on the platform of avirulent and probiotic microorganisms are a safe and adequate alternative to preparations based on attenuated Bacillus anthracis strains. Mucosal application results in direct contact of the vaccine preparations with the mucous membranes of those organs and tissues of the macro-organism that are exposed to the pathogen in the first place, resulting in the development of local and systemic immune responses. Live recombinant anthrax vaccines could be used both separately and in a prime-boost immunization scheme. The review focuses on the immunogenic and protective properties of experimental live genetically engineered preparations created from members of the genera Salmonella and Lactobacillus and from adenoviruses.

  20. Introducing a new methodology for the calculation of local philicity and multiphilic descriptor: an alternative to the finite difference approximation

    NASA Astrophysics Data System (ADS)

    Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel

    2018-07-01

    This work presents a new development based on the condensation scheme proposed by Chamorro and Pérez, in which new terms to correct the frozen molecular orbital approximation have been introduced (improved frontier molecular orbital approximation). The changes performed on the original development allow the orbital relaxation effects to be taken into account, providing results equivalent to those achieved by the finite difference approximation and leading to a methodology with great advantages. Local reactivity indices based on this new development have been obtained for a sample set of molecules and compared with indices based on the frontier molecular orbital and finite difference approximations. A new definition of the dual descriptor index based on the improved frontier molecular orbital methodology is also shown. In addition, taking advantage of the characteristics of the definitions obtained with the new condensation scheme, the local philicity descriptor is analysed by separating the components corresponding to the frontier molecular orbital approximation and to orbital relaxation effects, and the multiphilic descriptor is analysed in the same way. Finally, the effect of the basis set is studied, and calculations using DFT, CI and Møller-Plesset methodologies are performed to analyse the consequences of different levels of electron correlation.
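The finite difference approximation that the new condensation scheme is benchmarked against reduces, for condensed-to-atom indices, to simple differences of atomic partial charges between the N-, (N+1)- and (N-1)-electron systems. The sketch below uses invented charges for a hypothetical three-atom molecule.

```python
# Condensed Fukui functions via the finite difference approximation:
#   f_k(+) = q_k(N) - q_k(N+1)   (susceptibility to nucleophilic attack)
#   f_k(-) = q_k(N-1) - q_k(N)   (susceptibility to electrophilic attack)
# The atomic charges below are invented numbers, not computed values.
q_N   = [ 0.30, -0.50,  0.20]   # neutral system (N electrons)
q_Np1 = [-0.20, -0.90,  0.10]   # anion (N+1 electrons), charges sum to -1
q_Nm1 = [ 0.70, -0.10,  0.40]   # cation (N-1 electrons), charges sum to +1

f_plus  = [qn - qa for qn, qa in zip(q_N, q_Np1)]
f_minus = [qc - qn for qc, qn in zip(q_Nm1, q_N)]
dual    = [fp - fm for fp, fm in zip(f_plus, f_minus)]   # dual descriptor
```

By construction each set of condensed indices sums to one electron, a sanity check that also holds for the orbital-based definitions the paper develops.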

  1. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
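The "standard" resetting approach that the paper corrects can be sketched with an Euler(-Maruyama) discretization of a mean-reverting diffusion whose noise term sqrt(x) vanishes at the natural boundary x = 0. The SDE and parameter values are generic examples, not the paper's test problems.

```python
import math
import random

random.seed(1)

# Diffusion dX = theta*(mu - X) dt + sigma*sqrt(X) dW: the sqrt(X) noise
# coefficient vanishes at x = 0, so x = 0 is a naturally occurring boundary
# and x < 0 is a forbidden region. Parameters are illustrative.
theta, mu, sigma, dt = 1.0, 0.5, 0.6, 0.01

def euler_standard(x0, n_steps):
    """Euler-Maruyama with the naive 'standard' boundary treatment."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        x = x + theta * (mu - x) * dt + sigma * math.sqrt(max(x, 0.0)) * dW
        if x < 0.0:     # the random term carried the trajectory across the
            x = 0.0     # boundary: naive reset (this is the step the paper
                        # shows introduces a spurious force near the boundary)
        path.append(x)
    return path

path = euler_standard(0.05, 2000)
```

The reset keeps every trajectory inside the allowed region, which is exactly why it looks harmless; the paper's point is that doing so repeatedly biases trajectories that linger near the boundary, and that the discretization should instead be corrected so an exact property of the diffusion is respected.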

  2. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries

    NASA Astrophysics Data System (ADS)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.

  3. Active identification and control of aerodynamic instabilities in axial and centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Krichene, Assad

    In this thesis, it is experimentally shown that dynamic cursors to stall and surge exist in both axial and centrifugal compressors using the experimental axial and centrifugal compressor rigs located in the School of Aerospace Engineering at the Georgia Institute of Technology. Further, it is shown that the dynamic cursors to stall and surge can be identified in real-time and they can be used in a simple control scheme to avoid the occurrence of stall and surge instabilities altogether. For the centrifugal compressor, a previously developed real-time observer is used in order to detect dynamic cursors to surge in real-time. An off-line analysis using the Fast Fourier Transform (FFT) of the open loop experimental data from the centrifugal compressor rig is carried out to establish the influence of compressor speed on the dynamic cursor frequency. The variation of the amplitude of dynamic cursors with compressor operating condition from experimental data is qualitatively compared with simulation results obtained using a generic compression system model subjected to white noise excitation. Using off-line analysis results, a simple control scheme based on fuzzy logic is synthesized for surge avoidance and recovery. The control scheme is implemented in the centrifugal compressor rig using compressor bleed as well as fuel flow to the combustor. Closed loop experimental results are obtained to demonstrate the effectiveness of the controller for both surge avoidance and surge recovery. The existence of stall cursors in an axial compression system is established using the observer scheme from off-line analysis of an existing database of a commercial gas turbine engine. However, the observer scheme is found to be ineffective in detecting stall cursors in the experimental axial compressor rig in the School of Aerospace Engineering at the Georgia Institute of Technology. 
An alternate scheme, based on the amplitude of the pressure signal at the blade passage frequency obtained from a pressure sensor located in the casing over the blade row, is developed and used in the axial compressor rig for stall and surge avoidance and recovery. (Abstract shortened by UMI.)
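The alternate stall cursor described above amounts to tracking one spectral bin: compute the spectrum of the casing pressure signal and read off the amplitude at the blade passage frequency (BPF). The sampling rate, BPF value, and synthetic tone-plus-noise signal below are assumptions for illustration, not the rig's data.

```python
import numpy as np

fs = 10_000.0                      # sampling rate, Hz (assumed)
bpf = 1_200.0                      # blade passage frequency, Hz (assumed)
t = np.arange(0.0, 0.5, 1.0 / fs)  # 0.5 s record

# Synthetic "healthy" signal: a coherent BPF tone plus broadband noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * bpf * t) + 0.1 * rng.standard_normal(t.size)

def bpf_amplitude(x, fs, freq):
    """Single-sided FFT amplitude at the bin closest to `freq`."""
    spec = np.fft.rfft(x) / x.size
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - freq)))
    return 2.0 * np.abs(spec[k])

amp = bpf_amplitude(signal, fs, bpf)
```

A controller built on this cursor would monitor `amp` over successive records and act (bleed, fuel modulation) when it departs from its healthy baseline.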

  4. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which have been studied numerically by different methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth-Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth-Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
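As background for the Alternating Direction Implicit idea, a minimal Peaceman-Rachford ADI step for the 2-D heat equation is sketched below: each half step is implicit in one coordinate direction only, so only tridiagonal systems need to be solved. This is a generic 2-D illustration under zero Dirichlet boundaries, not the paper's 3-D schemes with a non-linear source term.

```python
import numpy as np

n, D, dt = 32, 1.0, 1e-3           # interior grid size, diffusivity, time step
h = 1.0 / (n + 1)
r = D * dt / h**2

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    m = len(d)
    cp, dp = np.empty(m), np.empty(m)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, m):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(m)
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def lap1d(u, axis):
    """Second difference along one axis with zero Dirichlet boundaries."""
    up = np.zeros((n + 2, n + 2))
    up[1:-1, 1:-1] = u
    if axis == 0:
        return up[2:, 1:-1] - 2 * u + up[:-2, 1:-1]
    return up[1:-1, 2:] - 2 * u + up[1:-1, :-2]

# Implicit half-step operator (I - (r/2) d2): 1 + r on the main diagonal.
sub = np.full(n, -r / 2); dia = np.full(n, 1 + r); sup = np.full(n, -r / 2)

def adi_step(u):
    # Half step 1: implicit along axis 0, explicit along axis 1.
    rhs = u + (r / 2) * lap1d(u, axis=1)
    ustar = np.column_stack([thomas(sub, dia, sup, rhs[:, j]) for j in range(n)])
    # Half step 2: implicit along axis 1, explicit along axis 0.
    rhs = ustar + (r / 2) * lap1d(ustar, axis=0)
    return np.vstack([thomas(sub, dia, sup, rhs[i, :]) for i in range(n)])

# Decaying sine mode: the exact peak value at time t is ~exp(-2 pi^2 D t).
xs = np.arange(1, n + 1) * h
u = np.outer(np.sin(np.pi * xs), np.sin(np.pi * xs))
for _ in range(100):               # integrate to t = 0.1
    u = adi_step(u)
peak = u.max()
```

The payoff of the splitting is that each half step costs a set of O(n) tridiagonal solves instead of one large sparse solve; the Douglas scheme the paper favors extends the same idea to three dimensions with higher-order accuracy.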

  5. Development of a methodology for classifying software errors

    NASA Technical Reports Server (NTRS)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  6. Progressing Knowledge in Alternative and Local Food Networks: Critical Reflections and a Research Agenda

    ERIC Educational Resources Information Center

    Tregear, Angela

    2011-01-01

    In the now extensive literature on alternative food networks (AFNs) (e.g. farmers' markets, community supported agriculture, box schemes), a body of work has pointed to socio-economic problems with such systems, which run counter to headline claims in the literature. This paper argues that rather than being a reflection of inherent complexities in…

  7. Land Application of Wastes: An Educational Program. Non-Crop and Forest Systems - Module 13, Objectives, and Script.

    ERIC Educational Resources Information Center

    Clarkson, W. W.; And Others

    This module discusses the characteristics of alternate sites and management schemes and attempts to evaluate the efficiency of each alternative in terms of waste treatment. Three types of non-crop land application are discussed: (1) forest lands; (2) park and recreational application; and (3) land reclamation in surface or strip mined areas. (BB)

  8. Space Station racks weight and CG measurement using the rack insertion end-effector

    NASA Technical Reports Server (NTRS)

    Brewer, William V.

    1994-01-01

    The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.

  9. Counterfactual entanglement distribution without transmitting any particles.

    PubMed

    Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou

    2014-04-21

    To date, all schemes for entanglement distribution have needed to send entangled particles, or a separable mediating particle, among distant participants. Here, we propose a counterfactual protocol for entanglement distribution that departs from these traditional forms: two distant particles can be entangled with no physical particle traveling between the two remote participants. We also present an alternative scheme for realizing counterfactual photonic entangled-state distribution using a Michelson-type interferometer and a self-assembled GaAs/InAs quantum dot embedded in an optical microcavity. A numerical analysis of the effect of experimental imperfections on the performance of the scheme shows that the entanglement distribution may be implementable with high fidelity.

  10. Blind ICA detection based on second-order cone programming for MC-CDMA systems

    NASA Astrophysics Data System (ADS)

    Jen, Chih-Wei; Jou, Shyh-Jye

    2014-12-01

    The multicarrier code division multiple access (MC-CDMA) technique has received considerable interest for its potential application to future wireless communication systems due to its high data rate. A common problem regarding the blind multiuser detectors used in MC-CDMA systems is that they are extremely sensitive to the complex channel environment. Besides, the perturbation of colored noise may negatively affect the performance of the system. In this paper, a new coherent detection method will be proposed, which utilizes the modified fast independent component analysis (FastICA) algorithm, based on approximate negentropy maximization that is subject to the second-order cone programming (SOCP) constraint. The aim of the proposed coherent detection is to provide robustness against small-to-medium channel estimation mismatch (CEM) that may arise from channel frequency response estimation error in the MC-CDMA system, which is modulated by downlink binary phase-shift keying (BPSK) under colored noise. Noncoherent demodulation schemes are preferable to coherent demodulation schemes, as the latter are difficult to implement over time-varying fading channels. Differential phase-shift keying (DPSK) is therefore the natural choice for an alternative modulation scheme. Furthermore, the new blind differential SOCP-based ICA (SOCP-ICA) detection without channel estimation and compensation will be proposed to combat Doppler spread caused by time-varying fading channels in the DPSK-modulated MC-CDMA system under colored noise. In this paper, numerical simulations are used to illustrate the robustness of the proposed blind coherent SOCP-ICA detector against small-to-medium CEM and to emphasize the advantage of the blind differential SOCP-ICA detector in overcoming Doppler spread.
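    The SOCP-constrained detector above builds on the FastICA fixed-point iteration. As a sketch of the underlying plain FastICA (negentropy-style update with the tanh nonlinearity, symmetric decorrelation) separating a toy two-source mixture; the paper's SOCP constraint and MC-CDMA signal model are not reproduced here, and the mixture below is an assumption for illustration:

```python
import numpy as np

def whiten(X):
    """Center and whiten mixed signals X (channels x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ X

def fastica(X, n_iter=200):
    """Symmetric FastICA with the tanh (log-cosh) nonlinearity."""
    Z = whiten(X)
    n, m = Z.shape
    W = np.random.default_rng(0).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G ** 2                     # derivative of tanh
        W = (G @ Z.T) / m - np.diag(Gp.mean(axis=1)) @ W
        # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        d, E = np.linalg.eigh(W @ W.T)
        W = E @ np.diag(d ** -0.5) @ E.T @ W
    return W @ Z

# toy two-source mixture: a sinusoid and a square wave
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
S = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown mixing matrix
recovered = fastica(A @ S)
```

    Each recovered component matches one true source up to sign, permutation, and scale, which is all ICA can guarantee.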

  11. A review and evaluation of research on the deaf-blind from perceptual, communicative, social and rehabilitative perspectives.

    PubMed

    Rönnberg, J; Borg, E

    2001-01-01

    This paper reviews research on deaf-blind individuals, primarily from behavioral and communicative points of view. Inclusion in the population of deaf-blind is qualified by describing a variety of subgroups and genetically based syndromes associated with deaf-blindness. Sensory assessment procedures--based primarily on residual capacities--are appraised. Consequences for everyday life are described briefly. Non-sensory, alternative classificatory schemes and procedures are presented and the results from behavior modification procedures used for correcting maladaptive behaviors are summarized. Methods for communicating tactilely are described and evaluated. Attention is also drawn to some suggestions regarding learning of alphabetic codes and sign acquisition. Finally, suggestions for future research are proposed.

  12. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820
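    The MB/MF contrast described in this record can be made concrete on a toy deterministic MDP (my own construction, not from the paper): model-based value iteration consults the transition model directly, while model-free Q-learning recovers the same action values from sampled experience alone.

```python
import random

# Toy deterministic MDP: states 0, 1; state 2 is terminal.
# TRANS maps (state, action) -> (next_state, reward).
TRANS = {(0, 0): (1, 0.0), (0, 1): (2, 1.0),
         (1, 0): (2, 5.0), (1, 1): (2, 0.0)}
GAMMA = 0.9
TERMINAL = 2

def model_based_values():
    """Value iteration: uses the transition model TRANS directly (MB)."""
    Q = {sa: 0.0 for sa in TRANS}
    for _ in range(100):
        for (s, a), (s2, r) in TRANS.items():
            v_next = 0.0 if s2 == TERMINAL else max(Q[(s2, b)] for b in (0, 1))
            Q[(s, a)] = r + GAMMA * v_next
    return Q

def model_free_values(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """Q-learning: learns the same values from sampled transitions only (MF)."""
    rng = random.Random(seed)
    Q = {sa: 0.0 for sa in TRANS}
    for _ in range(episodes):
        s = 0
        while s != TERMINAL:
            a = rng.choice((0, 1)) if rng.random() < eps else \
                max((0, 1), key=lambda b: Q[(s, b)])
            s2, r = TRANS[(s, a)]
            v_next = 0.0 if s2 == TERMINAL else max(Q[(s2, b)] for b in (0, 1))
            Q[(s, a)] += alpha * (r + GAMMA * v_next - Q[(s, a)])
            s = s2
    return Q

mb = model_based_values()
mf = model_free_values()
```

    Both converge to the same optimal action values; the MF route pays in samples what the MB route pays in computation over the model.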

  13. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    NASA Astrophysics Data System (ADS)

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.

    2017-09-01

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allows C allocation to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model.
D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C-LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem / Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.

  14. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    DOE PAGES

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; ...

    2017-09-22

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allows C allocation to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.–iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model.
D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C–LAI relationship in the model did not match the observed leaf C–LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem/Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.

  15. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allows C allocation to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.–iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model.
D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C–LAI relationship in the model did not match the observed leaf C–LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem/Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.

  16. A class of the van Leer-type transport schemes and its application to the moisture transport in a general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.

    1994-01-01

    A generalized form of the second-order van Leer transport scheme is derived. Several constraints to the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations and unphysical negative mixing ratio. Within the context of low-resolution climate simulations, the aforementioned characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effect of the filling algorithm used in conjunction with the original scheme is also discussed, in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
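    The monotonicity property described in this record can be illustrated with a minimal 1D finite-volume sketch (not the GLA GCM implementation): a van Leer-type piecewise-linear reconstruction with a slope limiter (minmod here, an assumption for simplicity) transports a sharp pulse without spurious oscillations or negative values.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, so no new over/undershoots are created."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(q, c):
    """One step of 1D advection (periodic, velocity > 0), CFL number c in (0, 1].
    Piecewise-linear (van Leer-type) reconstruction, minmod-limited."""
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    q_face = q + 0.5 * (1.0 - c) * slope   # upwind face value
    flux = c * q_face                      # flux leaving each cell to the right
    return q - (flux - np.roll(flux, 1))

# sharp square pulse: a worst case for oscillation-prone schemes
q = np.zeros(100)
q[40:60] = 1.0
for _ in range(50):
    q = advect_step(q, 0.5)
```

    After 50 steps the pulse is smeared somewhat (the implicit diffusion the record mentions) but stays in [0, 1] and conserves mass exactly.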

  17. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
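    The elitism-based immigrants mechanism can be sketched with a generic binary GA on the OneMax problem (all parameter values below are assumptions for illustration, not from the paper): each generation, the worst fraction of the population is replaced by mutated copies of the previous elite.

```python
import random

def elitism_based_immigrants_ga(fitness, n_bits=20, pop_size=30,
                                ratio=0.2, p_mut=0.02, gens=60, seed=1):
    """Generational GA where each generation the worst `ratio` of the
    population is replaced by mutated copies of the previous elite."""
    rng = random.Random(seed)
    def mutate(ind, p):
        return [b ^ (rng.random() < p) for b in ind]
    def crossover(a, b):
        cut = rng.randrange(1, n_bits)
        return a[:cut] + b[cut:]
    pop = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        n_imm = int(ratio * pop_size)
        # immigrants: elite-derived, injecting diversity guided by the elite
        immigrants = [mutate(elite, p_mut * 5) for _ in range(n_imm)]
        # breed the rest by tournament selection + crossover + mutation
        def pick():
            return max(rng.sample(pop, 2), key=fitness)
        offspring = [mutate(crossover(pick(), pick()), p_mut)
                     for _ in range(pop_size - n_imm - 1)]
        pop = [elite] + offspring + immigrants
    return max(pop, key=fitness)

best = elitism_based_immigrants_ga(sum)   # OneMax: fitness = number of ones
```

    In a truly dynamic environment the fitness function would change over time; the elite-derived immigrants then act as the re-adaptation mechanism the record describes.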

  18. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation.
Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
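    The mechanics of the two perturbed parameter strategies can be sketched generically (Gaussian perturbations and the AR(1) form are assumptions for illustration, not the ECMWF implementation): fixed draws held constant per ensemble member versus a stochastic evolution that preserves the target mean and spread.

```python
import numpy as np

def fixed_perturbed_params(n_members, mean, sd, seed=0):
    """One draw per ensemble member, held constant through the forecast."""
    rng = np.random.default_rng(seed)
    return rng.normal(mean, sd, size=n_members)

def stochastic_perturbed_params(n_members, n_steps, mean, sd,
                                phi=0.95, seed=0):
    """AR(1) evolution: parameters vary in time but keep the target
    stationary mean and standard deviation."""
    rng = np.random.default_rng(seed)
    p = np.empty((n_steps, n_members))
    p[0] = rng.normal(mean, sd, size=n_members)
    noise_sd = sd * np.sqrt(1.0 - phi ** 2)   # keeps stationary variance sd^2
    for t in range(1, n_steps):
        p[t] = mean + phi * (p[t - 1] - mean) + rng.normal(0, noise_sd, n_members)
    return p

fixed = fixed_perturbed_params(50, mean=1.0, sd=0.2)
varying = stochastic_perturbed_params(50, 500, mean=1.0, sd=0.2)
```

    Both schemes sample the same marginal parameter distribution; they differ only in whether the perturbation decorrelates within a single forecast.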

  19. Three-dimensional simulation of vortex breakdown

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Salas, M. D.

    1990-01-01

    The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in conservation form, cast in a generalized coordinate system, is solved numerically to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
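    The ADI idea used for the time integration above (and in the osmosis work cited at the top of this listing) can be sketched on a simpler problem. A minimal illustration, using the related classical Peaceman-Rachford splitting for the 2D heat equation rather than the backward Euler ADI of the record: each half step is implicit in only one direction, so it reduces to a set of tridiagonal solves.

```python
import numpy as np

def solve_tridiag(a, b, c, d):
    """Thomas algorithm (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def shift_lap(u, axis):
    """1D discrete Laplacian along `axis`, zero (Dirichlet) boundaries."""
    up = np.zeros_like(u); um = np.zeros_like(u)
    if axis == 0:
        up[:-1] = u[1:]; um[1:] = u[:-1]
    else:
        up[:, :-1] = u[:, 1:]; um[:, 1:] = u[:, :-1]
    return up - 2.0 * u + um

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on interior
    values (zero boundaries); r = dt / (2 h^2)."""
    n = u.shape[0]
    a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
    rhs = u + r * shift_lap(u, axis=1)        # implicit in x, explicit in y
    half = np.empty_like(u)
    for j in range(n):
        half[:, j] = solve_tridiag(a, b, c, rhs[:, j])
    rhs = half + r * shift_lap(half, axis=0)  # implicit in y, explicit in x
    out = np.empty_like(u)
    for i in range(n):
        out[i, :] = solve_tridiag(a, b, c, rhs[i, :])
    return out

n, h, dt = 31, 1.0 / 32, 1e-3
x = np.arange(1, n + 1) * h
u0 = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
u = u0.copy()
for _ in range(100):
    u = adi_step(u, dt / (2 * h * h))
```

    The sin(pi x) sin(pi y) mode decays at very nearly the analytic rate exp(-2 pi^2 t), confirming the splitting is consistent and unconditionally stable even for these large implicit steps.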

  20. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on the data of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
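    The figures of merit named in the cost-benefit analysis (NPV, BCR, IRR) reduce to standard discounted-cash-flow arithmetic. A minimal sketch with hypothetical cash flows, not the paper's data; the Monte Carlo step would simply re-run these functions over sampled cash-flow scenarios.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] falls at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def bcr(rate, benefits, costs):
    """Benefit-cost ratio: PV(benefits) / PV(costs)."""
    return npv(rate, benefits) / npv(rate, costs)

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection (assumes one sign change)."""
    f_lo = npv(lo, cashflows)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = npv(mid, cashflows)
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# hypothetical scheme: 1000 upfront cost, 300/year net benefit for 5 years
flows = [-1000.0] + [300.0] * 5
benefits = [0.0] + [300.0] * 5
costs = [1000.0] + [0.0] * 5
```

    A scheme is financially attractive when NPV > 0 at the chosen discount rate, equivalently BCR > 1, equivalently IRR above the discount rate.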

  1. On multilevel RBF collocation to solve nonlinear PDEs arising from endogenous stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl

    2018-06-01

    The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
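    In the method above, each Newton step reduces the nonlinear PDE to a linear elliptic solve by RBF collocation. A single-level sketch of that inner solve on a 1D Poisson problem with multiquadric RBFs (the multilevel structure, the Newton outer loop, and the shape parameter value below are not from the paper; c is an assumed choice):

```python
import numpy as np

def mq(r2, c2):
    """Multiquadric basis sqrt(r^2 + c^2), as a function of squared distance."""
    return np.sqrt(r2 + c2)

def solve_poisson_rbf(f, n=41, c=0.1):
    """Kansa-style collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
    using multiquadric RBFs centered at the collocation points."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[:, None] - x[None, :]
    phi = mq(dx ** 2, c * c)
    # d^2/dx^2 of sqrt((x - xj)^2 + c^2) is c^2 / phi^3
    d2phi = c * c / phi ** 3
    A = d2phi.copy()
    A[0, :] = phi[0, :]       # boundary rows enforce u(0) = 0, u(1) = 0
    A[-1, :] = phi[-1, :]
    rhs = f(x)
    rhs[0] = rhs[-1] = 0.0
    coeff = np.linalg.solve(A, rhs)
    return x, phi @ coeff

x, u = solve_poisson_rbf(lambda x: -np.pi ** 2 * np.sin(np.pi * x))
```

    Against the exact solution sin(pi x), the collocation solution is accurate to well under a percent on this grid; the multilevel variant in the paper refines such solves over nested point sets.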

  2. Plasmonic Antenna Coupling for QWIPs

    NASA Technical Reports Server (NTRS)

    Hong, John

    2007-01-01

    In a proposed scheme for coupling light into a quantum-well infrared photodetector (QWIP), an antenna or an array of antennas made of a suitable metal would be fabricated on the face of what would otherwise be a standard QWIP. This or any such coupling scheme is required to effect polarization conversion: Light incident perpendicularly to the face is necessarily polarized in the plane of the face, whereas, as a matter of fundamental electrodynamics and related quantum selection rules, light must have a non-zero component of perpendicular polarization in order to be absorbed in the photodetection process. In a prior coupling scheme, gratings in the form of surface corrugations diffract normally incident light to oblique angles, thereby imparting some perpendicular polarization. Unfortunately, the corrugation-fabrication process increases the overall nonuniformity of a large QWIP array. The proposed scheme is an alternative to the use of surface corrugations.

  3. A framework to support decision making in the selection of sustainable drainage system design alternatives.

    PubMed

    Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David

    2017-10-01

    This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, and has 12 indicators. The multi-criteria analysis methods of entropy weight and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated with a SuDS case in China. Indicators used include flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience is an important design consideration, and it supports scheme selection in the case study. The proposed framework will help a decision maker to choose an appropriate design scheme for implementation without subjectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
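    The entropy weight and TOPSIS steps named in the framework can be sketched compactly (decision matrix values below are hypothetical, not the Chinese case study; for simplicity the entropy weights are computed on raw positive scores):

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from Shannon entropy of the decision
    matrix X (alternatives x criteria, all values positive)."""
    P = X / X.sum(axis=0)
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)
    d = 1.0 - E                       # degree of divergence per criterion
    return d / d.sum()

def topsis(X, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    benefit[j] is True if criterion j is better-is-larger."""
    R = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalization
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# hypothetical SuDS alternatives scored on 3 of the 12 indicators:
# flood volume (cost-type), resilience (benefit-type), capital cost (cost-type)
X = np.array([[120.0, 0.8, 300.0],
              [ 90.0, 0.6, 350.0],
              [150.0, 0.9, 250.0]])
w = entropy_weights(X)
score = topsis(X, w, benefit=np.array([False, True, False]))
```

    The alternative with the highest closeness coefficient is selected, which is how the framework removes subjectivity from the final choice.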

  4. Measurement-device-independent quantum key distribution with multiple crystal heralded source with post-selection

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Shang-Hong, Zhao; MengYi, Deng

    2018-03-01

    The multiple crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric downconversion process, we present a modified MDI-QKD scheme with MHPS in which two architectures are proposed, corresponding to a symmetric scheme and an asymmetric scheme. The symmetric scheme, which is linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue that arises in the symmetric scheme as the number of crystals increases. The numerical simulations show that our modified scheme offers clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source and the traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.
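    The benefit of multiplexing m heralded crystals behind a log-tree of switches can be illustrated with a toy probability calculation (the per-crystal heralding probability and per-switch transmission below are assumed values, and multi-pair emission is ignored):

```python
import math

def herald_prob(m, p):
    """P(at least one of m independent crystals heralds a photon)."""
    return 1.0 - (1.0 - p) ** m

def log_tree_transmission(m, eta_switch):
    """A routed photon crosses ceil(log2(m)) binary switches in the
    symmetric (log-tree) architecture."""
    depth = math.ceil(math.log2(m)) if m > 1 else 0
    return eta_switch ** depth

def mhps_output_prob(m, p, eta_switch):
    """Probability the multiplexed source emits a switch-routed photon."""
    return herald_prob(m, p) * log_tree_transmission(m, eta_switch)

probs = {m: mhps_output_prob(m, p=0.05, eta_switch=0.95)
         for m in (1, 2, 4, 8, 16)}
```

    Heralding probability grows with m while switch loss grows with tree depth, which is exactly the trade-off motivating the asymmetric (chained) alternative in the record.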

  5. Dynamical noise filter and conditional entropy analysis in chaos synchronization.

    PubMed

    Wang, Jiao; Lai, C-H

    2006-06-01

    It is shown that, in a chaotic synchronization system whose driving signal is exposed to channel noise, the estimation of the drive system states can be greatly improved by applying the dynamical noise filtering to the response system states. If the noise is bounded in a certain range, the estimation errors, i.e., the difference between the filtered responding states and the driving states, can be made arbitrarily small. This property can be used in designing an alternative digital communication scheme. An analysis based on the conditional entropy justifies the application of dynamical noise filtering in generating quality synchronization.

  6. A satellite-based personal communication system for the 21st century

    NASA Technical Reports Server (NTRS)

    Sue, Miles K.; Dessouky, Khaled; Levitt, Barry; Rafferty, William

    1990-01-01

    Interest in personal communications (PCOMM) has been stimulated by recent developments in satellite and terrestrial mobile communications. A personal access satellite system (PASS) concept was developed at the Jet Propulsion Laboratory (JPL) which has many attractive user features, including service diversity and a handheld terminal. Significant technical challenges addressed in formulating the PASS space and ground segments are discussed. PASS system concept and basic design features, high risk enabling technologies, an optimized multiple access scheme, alternative antenna coverage concepts, the use of non-geostationary orbits, user terminal radiation constraints, and user terminal frequency reference are covered.

  7. Constraint Preserving Schemes Using Potential-Based Fluxes. I. Multidimensional Transport Equations (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    $$u^{*}_{i,j} = u^{n}_{i,j} - \Delta t^{n} E^{n}_{i,j}, \qquad u^{**}_{i,j} = u^{*}_{i,j} - \Delta t^{n} E^{*}_{i,j}, \qquad u^{n+1}_{i,j} = \tfrac{1}{2}\left(u^{n}_{i,j} + u^{**}_{i,j}\right). \tag{2.26}$$ An alternative first-order accurate genuinely multi-dimensional time stepping is the extended Lax-Friedrichs type time stepping, $$u^{n+1}_{i,j} = \tfrac{1}{8}\left(4u^{n}_{i,j} + u^{n}_{i+1,j} + u^{n}_{i,j+1} + u^{n}_{i-1,j} + u^{n}_{i,j-1}\right) - \Delta t^{n} E^{n}_{i,j}. \tag{2.27}$$
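    The extended Lax-Friedrichs update (2.27) averages each cell with its four neighbors before applying the residual. Setting the residual term E to zero isolates the averaging stencil; a sketch (periodic boundaries assumed) verifying that it conserves the total and satisfies a discrete maximum principle:

```python
import numpy as np

def extended_lax_friedrichs_step(u, E, dt):
    """One step of u_{ij}^{n+1} = (4 u_{ij} + u_{i+1,j} + u_{i-1,j}
    + u_{i,j+1} + u_{i,j-1}) / 8 - dt * E_{ij}, periodic boundaries."""
    avg = (4.0 * u
           + np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0)
           + np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1)) / 8.0
    return avg - dt * E

rng = np.random.default_rng(0)
u0 = rng.random((32, 32))
u = u0.copy()
for _ in range(20):
    u = extended_lax_friedrichs_step(u, np.zeros_like(u), dt=0.01)
```

    Because the stencil weights are non-negative and sum to one, the E = 0 update is a convex combination: totals are preserved and no new extrema appear, the usual first-order Lax-Friedrichs behavior.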

  8. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
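    The Barnes-Hut idea underlying the record can be sketched compactly in serial 2D form (a quadtree with a center-of-mass opening-angle test; this is an illustration of the base algorithm, not the authors' message-passing formulation, and the softening and theta values are assumptions):

```python
import math, random

class Node:
    """Quadtree node covering a square of half-width `half` at (cx, cy)."""
    __slots__ = ("cx", "cy", "half", "mass", "mx", "my", "children", "body")
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.mx = self.my = 0.0        # mass-weighted coordinate sums
        self.children = None
        self.body = None               # (x, y, m) while the node holds one body

def _child(node, x, y):
    if node.children is None:
        h = node.half / 2.0
        node.children = [Node(node.cx + (h if k & 1 else -h),
                              node.cy + (h if k & 2 else -h), h)
                         for k in range(4)]
    return node.children[(x >= node.cx) + 2 * (y >= node.cy)]

def insert(node, x, y, m):
    if node.mass == 0.0:               # empty leaf
        node.body = (x, y, m)
    elif node.body is not None:        # occupied leaf: split it
        bx, by, bm = node.body
        node.body = None
        insert(_child(node, bx, by), bx, by, bm)
        insert(_child(node, x, y), x, y, m)
    else:                              # internal node
        insert(_child(node, x, y), x, y, m)
    node.mass += m
    node.mx += m * x
    node.my += m * y

def accel(node, x, y, theta=0.4, eps=1e-4):
    """Softened gravitational acceleration at (x, y); G = 1."""
    if node is None or node.mass == 0.0:
        return 0.0, 0.0
    if node.body is not None and node.body[0] == x and node.body[1] == y:
        return 0.0, 0.0                # skip self-interaction
    dx = node.mx / node.mass - x
    dy = node.my / node.mass - y
    dist = math.sqrt(dx * dx + dy * dy + eps)
    # accept the cell as a point mass if it is a single body or far enough
    if node.body is not None or (2.0 * node.half) / dist < theta:
        f = node.mass / dist ** 3
        return f * dx, f * dy
    ax = ay = 0.0
    for ch in node.children:
        cax, cay = accel(ch, x, y, theta, eps)
        ax += cax; ay += cay
    return ax, ay

def direct_accel(bodies, i, eps=1e-4):
    """O(n^2) reference: the same softened force, summed pairwise."""
    xi, yi, _ = bodies[i]
    ax = ay = 0.0
    for j, (xj, yj, mj) in enumerate(bodies):
        if j == i:
            continue
        dx, dy = xj - xi, yj - yi
        dist = math.sqrt(dx * dx + dy * dy + eps)
        ax += mj * dx / dist ** 3
        ay += mj * dy / dist ** 3
    return ax, ay

random.seed(3)
bodies = [(random.random(), random.random(), random.uniform(0.5, 1.5))
          for _ in range(60)]
root = Node(0.5, 0.5, 0.5)
for x, y, m in bodies:
    insert(root, x, y, m)
bh = [accel(root, x, y) for x, y, _ in bodies]
```

    The parallel formulations in the record partition this tree across processors; the accuracy/cost trade-off is governed by the opening angle theta.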

  9. Economic evaluation of progeny-testing and genomic selection schemes for small-sized nucleus dairy cattle breeding programs in developing countries.

    PubMed

    Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M

    2017-03-01

    In developing countries minimal and erratic performance and pedigree recording impede implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus was made up of 453 male and 360 female animals distributed in 8 non-overlapping age classes. Each year 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margins, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (size commercial population). Genetic superiorities, deterministically simulated using pseudo-BLUP index and CDE, were determined using gene flow. Costs were for one generation of selection. Results show that GS schemes had higher cumulated genetic gain in the commercial cow population and higher gross margins compared with PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes many bulls are culled before selection. We therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage resulted in an increase in gross margins in PT schemes, but gross margins remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries. 
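
    The revenue formula stated in the abstract (cumulative discounted expressions × milk price × commercial population size) is simple enough to spell out. The input values below are hypothetical placeholders, not results from the study.

```python
def gross_margin(cde_kg, costs_eur, milk_price=0.32, herd_size=100_000):
    """Gross margin (EUR) following the abstract's definition:
    revenues = cumulative discounted expressions (kg milk/cow)
               x milk price (EUR/kg) x commercial population size,
    minus discounted costs. All numbers passed in below are hypothetical."""
    revenue = cde_kg * milk_price * herd_size
    return revenue - costs_eur

# hypothetical progeny-testing vs genomic-selection comparison
pt = gross_margin(cde_kg=20.0, costs_eur=400_000)   # PT-like scheme
gs = gross_margin(cde_kg=30.0, costs_eur=250_000)   # GS-like scheme
```

    In this toy comparison the GS-like scheme wins on both sides of the ledger, mirroring the paper's finding that higher genetic gain and lower running costs combine multiplicatively in the margin.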

  10. Metal-mediated DNA base pairing: alternatives to hydrogen-bonded Watson-Crick base pairs.

    PubMed

    Takezawa, Yusuke; Shionoya, Mitsuhiko

    2012-12-18

    With its capacity to store and transfer the genetic information within a sequence of monomers, DNA plays its central role in chemical evolution through replication and amplification. This elegant behavior is largely based on highly specific molecular recognition between nucleobases through the specific hydrogen bonds in the Watson-Crick base pairing system. While the native base pairs have grown amazingly sophisticated over the long history of evolution, synthetic chemists have devoted considerable efforts to create alternative base pairing systems in recent decades. Most of these new systems were designed based on the shape complementarity of the pairs or the rearrangement of hydrogen-bonding patterns. We wondered whether metal coordination could serve as an alternative driving force for DNA base pairing and why hydrogen bonding was selected on Earth in the course of molecular evolution. Therefore, we envisioned an alternative design strategy: we replaced hydrogen bonding with another important scheme in biological systems, metal-coordination bonding. In this Account, we provide an overview of the chemistry of metal-mediated base pairing including basic concepts, molecular design, characteristic structures and properties, and possible applications of DNA-based molecular systems. We describe several examples of artificial metal-mediated base pairs, such as the Cu(2+)-mediated hydroxypyridone base pair, H-Cu(2+)-H (where H denotes a hydroxypyridone-bearing nucleoside), developed by us and other researchers. To design the metallo-base pairs, we carefully chose appropriate combinations of ligand-bearing nucleosides and metal ions. As expected from their stronger bonding through metal coordination, DNA duplexes possessing metallo-base pairs exhibited higher thermal stability than natural hydrogen-bonded DNAs. Furthermore, we could also use metal-mediated base pairs to construct or induce other high-order structures. 
These features could lead to metal-responsive functional DNA molecules such as artificial DNAzymes and DNA machines. In addition, the metallo-base pairing system is a powerful tool for the construction of homogeneous and heterogeneous metal arrays, which can lead to DNA-based nanomaterials such as electronic wires and magnetic devices. Recently researchers have investigated these systems as enzyme replacements, which may offer an additional contribution to chemical biology and synthetic biology through the expansion of the genetic alphabet.

  11. Universality of bridge functions and its relation to variational perturbation theory and additivity of equations of state

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Yaakov

    1984-05-01

    Featuring the modified hypernetted-chain (MHNC) scheme as a variational fitting procedure, we demonstrate that the accuracy of the variational perturbation theory (VPT) and of the method based on additivity of equations of state is determined by the excess entropy dependence of the bridge-function parameters [i.e., η(s) when the Percus-Yevick hard-sphere bridge functions are employed]. It is found that η(s) is nearly universal for all soft (i.e., "physical") potentials while it is distinctly different for the hard spheres, providing a graphical display of the "jump" in pair-potential space (with respect to accuracy of VPT) from "hard" to "soft" behavior. The universality of η(s) provides a local criterion for the MHNC scheme that should be useful for inverting structure-factor data in order to obtain the potential. An alternative local MHNC criterion due to Lado is rederived and extended, and it is also analyzed in light of the plot of η(s).

  12. A time delay controller for magnetic bearings

    NASA Technical Reports Server (NTRS)

    Youcef-Toumi, K.; Reddy, S.

    1991-01-01

    The control of systems with unknown dynamics and unpredictable disturbances has raised some challenging problems. This is particularly important when high system performance needs to be guaranteed at all times. Recently, the Time Delay Control has been suggested as an alternative control scheme. The proposed control system does not require an explicit plant model nor does it depend on the estimation of specific plant parameters. Rather, it combines adaptation with past observations to directly estimate the effect of the plant dynamics. A control law is formulated for a class of dynamic systems and a sufficient condition is presented for control systems stability. The derivation is based on the bounded input-bounded output stability approach using L sub infinity function norms. The control scheme is implemented on a five degrees of freedom high speed and high precision magnetic bearing. The control performance is evaluated using step responses, frequency responses, and disturbance rejection properties. The experimental data show an excellent control performance despite the system complexity.

  13. Report of the eRHIC Ring-Ring Working Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aschenauer, E. C.; Berg, S.; Blaskiewicz, M.

    2015-10-13

    This report evaluates the ring-ring option for eRHIC as a lower-risk alternative to the linac-ring option. The reduced risk goes along with a reduced initial luminosity performance. However, a luminosity upgrade path is kept open. This upgrade path consists of two branches, with the ultimate upgrade being either a ring-ring or a linac-ring scheme. The linac-ring upgrade could be almost identical to the proposed linac-ring scheme, which is based on an ERL in the RHIC tunnel. This linac-ring version has been studied in great detail over the past ten years, and its significant risks are known. On the other hand, no detailed work on an ultimate performance ring-ring scenario has been performed yet, other than the development of a consistent parameter set. Pursuing the ring-ring upgrade path introduces high risks and requires significant design work that is beyond the scope of this report.

  14. High-performance gap-closing vibrational energy harvesting using electret-polarized dielectric oscillators

    NASA Astrophysics Data System (ADS)

    Feng, Yue; Yu, Zejie; Han, Yanhui

    2018-01-01

    In conventional gap-closing electret-biased electrostatic energy harvesting (EEEH) schemes, electrets with a very low ratio of electret thickness to permittivity are in great demand to allow the attainment of high power output. However, in practice, pursuing such a low ratio introduces unwanted burdens on the electret stability and therefore the reliability of the EEEH devices. In this paper, we propose a dielectric-oscillator-based electrostatic EH (DEEH) scheme as an alternative approach to harvesting electret-biased electrostatic energy. This approach permits the fabrication of an electret-free closed EH circuit. The DEEH architecture directly collects the electrical energy exclusively through the oscillating dielectric body and thus completely circumvents the restrictions imposed by the electret parameters (thickness and permittivity) on power generation. Significantly, without considering the electret thickness and permittivity, both theoretical analysis and experiments have verified the effectiveness of this DEEH strategy, and a high figure of merit (on the order of 10^-8 mW cm^-2 V^-2 Hz^-1) was achieved for low-frequency movements.

  15. How Molecular Size Impacts RMSD Applications in Molecular Dynamics Simulations.

    PubMed

    Sargsyan, Karen; Grauffel, Cédric; Lim, Carmay

    2017-04-11

    The root-mean-square deviation (RMSD) is a similarity measure widely used in analysis of macromolecular structures and dynamics. As increasingly larger macromolecular systems are being studied, dimensionality effects such as the "curse of dimensionality" (a diminishing ability to discriminate pairwise differences between conformations with increasing system size) may exist and significantly impact RMSD-based analyses. For such large biomolecular systems, whether the RMSD or other alternative similarity measures might suffer from this "curse" and lose the ability to discriminate different macromolecular structures had not been explicitly addressed. Here, we show such dimensionality effects for both weighted and nonweighted RMSD schemes. We also provide a mechanism for the emergence of the "curse of dimensionality" for RMSD from the law of large numbers by showing that the conformational distributions from which RMSDs are calculated become increasingly similar as the system size increases. Our findings suggest the use of weighted RMSD schemes for small proteins (less than 200 residues) and nonweighted RMSD for larger proteins when analyzing molecular dynamics trajectories.
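
    The concentration effect described here is easy to reproduce in a toy experiment: pairwise RMSDs between random conformations cluster ever more tightly around their mean as the atom count grows, so their relative spread shrinks. The sketch below uses plain (nonweighted) RMSD on synthetic Gaussian "conformations", not MD data or the paper's protocol.

```python
import numpy as np

def rmsd(a, b):
    """Plain (nonweighted) RMSD between two N x 3 conformations."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def relative_spread(n_atoms, n_conf=50, seed=0):
    """Std/mean of all pairwise RMSDs among random Gaussian 'conformations';
    it shrinks with n_atoms, illustrating the concentration effect."""
    rng = np.random.default_rng(seed)
    confs = rng.normal(size=(n_conf, n_atoms, 3))
    d = [rmsd(confs[i], confs[j])
         for i in range(n_conf) for j in range(i + 1, n_conf)]
    return np.std(d) / np.mean(d)
```

    By the law-of-large-numbers argument in the abstract, the relative spread decays roughly like 1/sqrt(n_atoms), so a 100-fold increase in system size shrinks it by about an order of magnitude.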

  16. An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]

    NASA Technical Reports Server (NTRS)

    Buratynski, E. K.; Caughey, D. A.

    1984-01-01

    An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.

  17. Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus

    NASA Astrophysics Data System (ADS)

    Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta

    2006-11-01

    The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
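
    A predictor-corrector step of the general kind discussed here can be sketched for a scalar SDE dX = f(X) dt + σ dW: an Euler-Maruyama predictor followed by a trapezoidal corrector that reuses the same Brownian increment. This is a generic Heun-type scheme for additive noise, shown for illustration only; it is not the specific DMC propagator of the paper.

```python
import numpy as np

def heun_sde_step(x, f, sigma, dt, rng):
    """One predictor-corrector (Heun-type) step for dX = f(X) dt + sigma dW
    with additive noise: an Euler-Maruyama predictor followed by a
    trapezoidal corrector that reuses the same Brownian increment."""
    dw = rng.normal(scale=np.sqrt(dt), size=np.shape(x))
    x_pred = x + f(x) * dt + sigma * dw                      # predictor
    return x + 0.5 * (f(x) + f(x_pred)) * dt + sigma * dw    # corrector

# Ornstein-Uhlenbeck test problem: dX = -k X dt + sigma dW,
# whose stationary variance is sigma**2 / (2 k)
k, sigma = 1.0, 0.5
f = lambda x: -k * x
```

    Running an ensemble of such paths to stationarity reproduces the analytic variance σ²/(2k) with a much smaller time-step bias than a plain Euler-Maruyama integrator at the same dt.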

  18. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add further restrictions to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfied this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  19. Studies in integrated line-and packet-switched computer communication systems

    NASA Astrophysics Data System (ADS)

    Maglaris, B. S.

    1980-06-01

    The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out for both variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markov decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.

  20. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second scheme is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time coding based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate for even 1200 ps of differential group delay with negligible penalty.

  1. Formation mechanism of thermally optimized Ga-doped MgZnO transparent conducting electrodes for GaN-based light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Jang, Seon-Ho; Jo, Yong-Ryun; Lee, Young-Woong; Kim, Sei-Min; Kim, Bong-Joong; Bae, Jae-Hyun; An, Huei-Chun; Jang, Ja-Soon

    2015-05-01

    We report a highly transparent conducting electrode (TCE) scheme of MgxZn1-xO:Ga/Au/NiOx which was deposited on p-GaN by e-beam for GaN-based light emitting diodes (LEDs). The optical and electrical properties of the electrode were optimized by thermal annealing at 500°C for 1 minute in N2 + O2 (5:3) ambient. The light transmittance at the optimal condition increased up to 84-97% from the UV-A to yellow region. The specific contact resistance decreased to 4.3(±0.3) × 10^-5 Ω·cm^2. The improved properties of the electrode were attributed to the directionally elongated crystalline nanostructures formed in the MgxZn1-xO:Ga layer, which is compositionally uniform. Interestingly, the Au alloy nano-clusters created in the MgxZn1-xO:Ga layer during annealing at 500°C may also enhance the properties of the electrode by acting as a conducting bridge and a nano-sized mirror. Based on studies of the external quantum efficiency of blue LED devices, the proposed electrode scheme combined with an optimized annealing treatment suggests a potential alternative to ITO.

  2. Patients’ Data Management System Protected by Identity-Based Authentication and Key Exchange

    PubMed Central

    Rivero-García, Alexandra; Santos-González, Iván; Hernández-Goya, Candelaria; Caballero-Gil, Pino; Yung, Moti

    2017-01-01

    A secure and distributed framework for the management of patients’ information in emergency and hospitalization services is proposed here in order to seek improvements in efficiency and security in this important area. In particular, confidentiality protection, mutual authentication, and automatic identification of patients are provided. The proposed system is based on two types of devices: Near Field Communication (NFC) wristbands assigned to patients, and mobile devices assigned to medical staff. Two other main elements of the system are an intermediate server to manage the involved data, and a second server with a private key generator to define the information required to protect communications. An identity-based authentication and key exchange scheme is essential to provide confidential communication and mutual authentication between the medical staff and the private key generator through an intermediate server. The identification of patients is carried out through a keyed-hash message authentication code. Thanks to the combination of the aforementioned tools, a secure alternative mobile health (mHealth) scheme for managing patients’ data is defined for emergency and hospitalization services. Different parts of the proposed system have been implemented, including mobile application, intermediate server, private key generator and communication channels. Apart from that, several simulations have been performed, and, compared with the current system, significant improvements in efficiency have been observed. PMID:28362328
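
    The keyed-hash message authentication code used here for patient identification can be illustrated with Python's standard hmac module. The field layout and key handling below are hypothetical; only the HMAC primitive itself matches what the abstract describes.

```python
import hmac
import hashlib

def patient_tag(secret_key: bytes, patient_id: str) -> str:
    """Derive an HMAC-SHA256 authentication tag over a patient identifier.
    Field names and key management are illustrative, not the paper's protocol."""
    return hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_tag(secret_key: bytes, patient_id: str, tag: str) -> bool:
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(patient_tag(secret_key, patient_id), tag)
```

    Only a holder of the secret key can produce or check a valid tag, which is what lets an NFC wristband carry an identifier that the medical staff's devices can authenticate without exposing patient data.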

  3. Patients' Data Management System Protected by Identity-Based Authentication and Key Exchange.

    PubMed

    Rivero-García, Alexandra; Santos-González, Iván; Hernández-Goya, Candelaria; Caballero-Gil, Pino; Yung, Moti

    2017-03-31

    A secure and distributed framework for the management of patients' information in emergency and hospitalization services is proposed here in order to seek improvements in efficiency and security in this important area. In particular, confidentiality protection, mutual authentication, and automatic identification of patients are provided. The proposed system is based on two types of devices: Near Field Communication (NFC) wristbands assigned to patients, and mobile devices assigned to medical staff. Two other main elements of the system are an intermediate server to manage the involved data, and a second server with a private key generator to define the information required to protect communications. An identity-based authentication and key exchange scheme is essential to provide confidential communication and mutual authentication between the medical staff and the private key generator through an intermediate server. The identification of patients is carried out through a keyed-hash message authentication code. Thanks to the combination of the aforementioned tools, a secure alternative mobile health (mHealth) scheme for managing patients' data is defined for emergency and hospitalization services. Different parts of the proposed system have been implemented, including mobile application, intermediate server, private key generator and communication channels. Apart from that, several simulations have been performed, and, compared with the current system, significant improvements in efficiency have been observed.

  4. Development of a Grid-Based Gyro-Kinetic Simulation Code

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan

    2006-10-01

    A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP, for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dim code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of gyro-kinetic equations. Towards their non-linear phase, the simulations from this code are subject to significant overshoot problems, reflected by the development of regions where the distribution function takes negative values, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretisation (method of lines), are investigated. In particular, the Essentially Non Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods have first been tested in the simpler case of the 2-dim guiding-center model for the Kelvin-Helmholtz instability, which makes it possible to address the specific issue of the E×B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.
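
    The semi-Lagrangian idea — trace each grid point's characteristic back one step and interpolate the old solution at the departure point — can be sketched in 1D for constant advection. For brevity this toy uses a local 4-point cubic Lagrange interpolant on a periodic grid rather than CYGNE's global cubic splines; the overshoot near steep gradients that motivates ENO-type alternatives appears in exactly such interpolants.

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid.
    Each node's departure point x_i - a*dt is evaluated with a local
    4-point cubic (Lagrange) interpolant."""
    s = a * dt / dx                   # displacement in grid cells
    k = int(np.floor(s))              # whole-cell shift
    t = s - k                         # fractional part, 0 <= t < 1
    # cubic Lagrange weights for nodes at cell offsets -2, -1, 0, +1
    # around the shifted node i-k, evaluated at offset -t
    w = [(t - 1) * t * (t + 1) / 6,
         (2 - t) * t * (t + 1) / 2,
         (2 - t) * (1 - t) * (t + 1) / 2,
         -t * (2 - t) * (1 - t) / 6]
    return sum(wi * np.roll(u, k - off) for wi, off in zip(w, (-2, -1, 0, 1)))

n = 128
x = np.arange(n) / n
u0 = np.exp(-100 * (x - 0.5) ** 2)    # smooth initial bump
```

    Because the departure point may lie far from the node, the scheme has no CFL stability restriction, which is one of the main attractions of semi-Lagrangian methods.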

  5. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  6. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  7. A discrete-time chaos synchronization system for electronic locking devices

    NASA Astrophysics Data System (ADS)

    Minero-Ramales, G.; López-Mancilla, D.; Castañeda, Carlos E.; Huerta Cuellar, G.; Chiu Z., R.; Hugo García López, J.; Jaimes Reátegui, R.; Villafaña Rauda, E.; Posadas-Castillo, C.

    2016-11-01

    This paper presents a novel electronic locking key based on discrete-time chaos synchronization. Two Chen chaos generators are synchronized using the Model-Matching Approach, from non-linear control theory, in order to perform the encryption/decryption of the signal to be transmitted. A model/transmitter system is designed, generating a key of chaotic pulses in discrete time. A plant/receiver system uses the above-mentioned key to unlock the mechanism. Two alternative schemes to transmit the private chaotic key are proposed. The first one utilizes two transmission channels. One channel is used to encrypt the chaotic key and the other is used to achieve output synchronization. The second alternative uses only one transmission channel for obtaining synchronization and encryption of the chaotic key. In both cases, the private chaotic key is encrypted again with chaos to solve secure communication-related problems. The results obtained via simulations contribute to enhancing electronic locking devices.

  8. “Mirror, mirror, on the wall, who in this land is fairest of all?” – Distributional sensitivity in the measurement of socioeconomic inequality of health

    PubMed Central

    ERREYGERS, Guido; CLARKE, Philip; VAN OURTI, Tom

    2012-01-01

    This paper explores four alternative indices for measuring health inequalities in a way that takes into account attitudes towards inequality. First, we revisit the extended concentration index which has been proposed to make it possible to introduce changes into the distributional value judgements implicit in the standard concentration index. Next, we suggest an alternative index based on a different weighting scheme. In contrast to the extended concentration index, this new index has the ‘symmetry’ property. We also show how these indices can be generalized so that they satisfy the ‘mirror’ property, which may be seen as a desirable property when dealing with bounded variables. We compare the different indices empirically for under-five mortality rates and the number of antenatal visits in developing countries. PMID:22204878
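
    For reference, the standard concentration index that these extended indices generalize can be computed from the well-known covariance formula C = 2·cov(h, R)/μ, where R is the fractional socioeconomic rank and μ the mean of the health variable. The sketch below implements only this standard case; the paper's extended and mirror-symmetric indices modify the rank weights.

```python
import numpy as np

def concentration_index(h, ses_rank):
    """Standard concentration index C = 2*cov(h, R)/mean(h), with R the
    fractional socioeconomic rank (poorest first). Positive C means h is
    concentrated among the better-off."""
    h = np.asarray(h, dtype=float)[np.argsort(ses_rank)]  # sort poorest -> richest
    n = len(h)
    R = (np.arange(1, n + 1) - 0.5) / n                   # fractional ranks
    return 2 * np.cov(h, R, bias=True)[0, 1] / h.mean()
```

    An equal distribution gives C = 0, while a variable that rises (falls) with socioeconomic rank gives a positive (negative) index; the extended indices reweight the ranks to emphasize the poorer or richer end of the distribution.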

  9. Explodator: A new skeleton mechanism for the halate driven chemical oscillators

    NASA Astrophysics Data System (ADS)

    Noszticzius, Z.; Farkas, H.; Schelly, Z. A.

    1984-06-01

    In the first part of this work, some shortcomings in the present theories of the Belousov-Zhabotinskii oscillating reaction are discussed. In the second part, a new oscillatory scheme, the limited Explodator, is proposed as an alternative skeleton mechanism. This model contains an always unstable three-variable Lotka-Volterra core (the "Explodator") and a stabilizing limiting reaction. The new scheme exhibits Hopf bifurcation and limit cycle oscillations. Finally, some possibilities and problems of a generalization are mentioned.
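
    Oscillations of a three-variable Lotka-Volterra-type core are easy to demonstrate numerically. The system below is a generic cyclic Lotka-Volterra model with all rate constants set to one — an illustration of the kind of core the Explodator builds on, not the actual Explodator rate laws.

```python
import numpy as np

def rhs(v):
    """Generic cyclic three-variable Lotka-Volterra-type kinetics with all
    rate constants set to 1 (illustrative only, NOT the Explodator scheme)."""
    x, y, z = v
    return np.array([x * (1 - y), y * (x - z), z * (y - 1)])

def rk4(v, dt, steps):
    """Classical fourth-order Runge-Kutta integration of dv/dt = rhs(v)."""
    out = [v]
    for _ in range(steps):
        k1 = rhs(v)
        k2 = rhs(v + dt / 2 * k1)
        k3 = rhs(v + dt / 2 * k2)
        k4 = rhs(v + dt * k3)
        v = v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(v)
    return np.array(out)

# start near the (1, 1, 1) steady state and watch sustained oscillations
traj = rk4(np.array([1.2, 1.0, 0.9]), dt=0.01, steps=5000)
```

    This particular toy conserves the quantity Σ(v − ln v), so its oscillations neither grow nor decay; a stabilizing limiting reaction of the kind the abstract describes is what turns such neutral cycling into a robust limit cycle.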

  10. Concurrent hyperthermia estimation schemes based on extended Kalman filtering and reduced-order modelling.

    PubMed

    Potocki, J K; Tharp, H S

    1993-01-01

    The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
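
    The estimation idea — fuse noisy temperature-sensor readings into an estimate of an unknown quantity — reduces, in the simplest scalar case, to a textbook Kalman filter. The sketch below estimates a single constant parameter from noisy readings; the bioheat state-space model, sensor counts, and all numbers here are hypothetical, far simpler than the paper's EKF.

```python
import numpy as np

def kalman_constant(measurements, meas_var, x0=0.0, p0=1e3):
    """Scalar Kalman filter estimating a constant state from noisy readings,
    a toy analogue of estimating one unknown perfusion parameter from
    temperature-sensor data. x0 and p0 encode a vague prior."""
    x, p = x0, p0
    for z in measurements:
        gain = p / (p + meas_var)   # Kalman gain for H = 1, no process noise
        x = x + gain * (z - x)      # measurement update
        p = (1 - gain) * p          # posterior variance shrinks monotonically
    return x, p

# hypothetical numbers: a 'true' parameter observed 200 times with noise
rng = np.random.default_rng(0)
true_value = 4.2
readings = true_value + rng.normal(scale=0.5, size=200)
est, post_var = kalman_constant(readings, meas_var=0.25)
```

    With a vague prior this filter converges to the running mean of the data, and its posterior variance falls roughly as meas_var/n — which is why, as the abstract finds, estimator quality hinges on having enough sensors informative about each unknown.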

  11. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which adaptively identifies the most significant expansion coefficients. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  12. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  13. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
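The weighting idea is inverse-probability-of-censoring weighting, with the censoring distribution estimated separately within each partition cell. A minimal sketch, assuming a hand-made `leaf_ids` vector stands in for the leaves of a survival tree (names illustrative; edge cases such as a zero censoring-survival estimate are ignored):

```python
import numpy as np

def km_censor_survival(times, delta):
    """Kaplan-Meier estimate of the censoring survival S_C(t-) evaluated at
    each subject's own time; censoring events are delta == 0."""
    order = np.argsort(times, kind='stable')
    t, d = times[order], delta[order]
    n = len(t)
    s = 1.0
    surv_before = np.empty(n)
    i = 0
    while i < n:                       # walk through tied time points together
        j = i
        while j < n and t[j] == t[i]:
            j += 1
        surv_before[i:j] = s           # S_C just before this time
        s *= 1.0 - np.sum(d[i:j] == 0) / (n - i)
        i = j
    out = np.empty(n)
    out[order] = surv_before
    return out

def ipcw_weights(times, delta, leaf_ids):
    """Leaf-wise inverse-probability-of-censoring weights: the partition
    (leaf_ids) would come from a survival tree in the paper's method."""
    w = np.zeros(len(times))
    for leaf in np.unique(leaf_ids):
        m = leaf_ids == leaf
        w[m] = delta[m] / km_censor_survival(times[m], delta[m])
    return w
```

These weights then multiply the quantile-regression check-loss terms; estimating them leaf-by-leaf is what lets the censoring distribution depend on both continuous and discrete covariates.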

  14. Suppression of Lateral Diffusion and Surface Leakage Currents in nBn Photodetectors Using an Inverted Design

    NASA Astrophysics Data System (ADS)

    Du, X.; Savich, G. R.; Marozas, B. T.; Wicks, G. W.

    2018-02-01

    Surface leakage and lateral diffusion currents in InAs-based nBn photodetectors have been investigated. Devices fabricated using a shallow etch processing scheme that etches through the top contact and stops at the barrier exhibited large lateral diffusion current but undetectably low surface leakage. Such large lateral diffusion current significantly increased the dark current, especially in small devices, and caused pixel-to-pixel crosstalk in detector arrays. To eliminate the lateral diffusion current, two different approaches were examined. The conventional solution utilized a deep etch process, which etches through the top contact, barrier, and absorber. This deep etch processing scheme eliminated lateral diffusion, but introduced high surface current along the device mesa sidewalls, increasing the dark current. A high device failure rate was also observed in deep-etched nBn structures. An alternative approach to limit lateral diffusion used an inverted nBn structure that has its absorber grown above the barrier. Like the shallow etch process on conventional nBn structures, the inverted nBn devices were fabricated with a processing scheme that only etches the top layer (the absorber, in this case) but avoids etching through the barrier. The results show that inverted nBn devices have the advantage of eliminating the lateral diffusion current without introducing elevated surface current.

  15. Data multiplexing in radio interferometric calibration

    NASA Astrophysics Data System (ADS)

    Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.

    2018-03-01

    New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
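Stripped of the calibration details, the multiplexing idea is consensus ADMM in which only one block (one frequency) is updated per outer iteration, so a single compute agent can cycle through channels it cannot hold simultaneously. A minimal scalar sketch, assuming a toy least-squares consensus problem rather than the paper's calibration cost (all values illustrative):

```python
import numpy as np

# Consensus problem: minimize sum_f (a_f * x_f - b_f)^2  subject to x_f = z.
# Each frequency f keeps a local solution x_f and dual u_f; only ONE
# frequency is updated per outer iteration (cyclic updates), emulating a
# single compute agent multiplexed over the channels.

def cyclic_consensus_admm(a, b, rho=1.0, n_iter=4000):
    F = len(a)
    x = np.zeros(F)
    u = np.zeros(F)
    z = 0.0
    for k in range(n_iter):
        f = k % F                                   # cycle through frequencies
        # closed-form local update for the quadratic cost
        x[f] = (2 * a[f] * b[f] + rho * (z - u[f])) / (2 * a[f] ** 2 + rho)
        z = np.mean(x + u)                          # consensus (z) step
        u[f] += x[f] - z                            # dual update, this block only
    return x, z
```

At a fixed point all x_f agree with z, which equals the global least-squares minimizer, so cycling over blocks trades wall-clock iterations for a far smaller per-iteration memory and compute footprint.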

  16. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  17. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.

  18. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to remedy the weakness of the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has faults of its own: the authentication procedure cannot execute correctly, and the scheme is vulnerable to parallel-session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
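The class of schemes discussed here boils down to nonce-based mutual authentication with key agreement over a pre-shared secret. A generic sketch of that pattern (not Zhu's protocol or the enhanced scheme; all message formats and names are illustrative):

```python
import hashlib
import hmac
import os

# Generic nonce-based mutual authentication and key agreement over a
# pre-shared secret k (e.g. material stored on the smart card). This is
# the pattern such schemes refine, NOT the paper's exact protocol.

def _mac(k, *parts):
    return hmac.new(k, b'|'.join(parts), hashlib.sha256).digest()

def client_hello(client_id):
    n_c = os.urandom(16)                       # fresh client nonce
    return n_c, (client_id, n_c)

def server_respond(k, msg):
    _client_id, n_c = msg                      # a real scheme looks up k here
    n_s = os.urandom(16)                       # fresh server nonce
    return n_s, (n_s, _mac(k, b'server', n_c, n_s))

def client_finish(k, n_c, msg):
    n_s, proof_s = msg
    if not hmac.compare_digest(proof_s, _mac(k, b'server', n_c, n_s)):
        raise ValueError('server authentication failed')
    # both fresh nonces enter the session key, blocking replayed sessions
    return _mac(k, b'session', n_c, n_s), _mac(k, b'client', n_c, n_s)

def server_finish(k, n_c, n_s, proof_c):
    if not hmac.compare_digest(proof_c, _mac(k, b'client', n_c, n_s)):
        raise ValueError('client authentication failed')
    return _mac(k, b'session', n_c, n_s)
```

Binding both nonces into every MAC is what defeats the parallel-session attack mentioned in the record: a transcript from one session cannot be replayed into another, because the other session's nonces differ.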

  19. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric authentication schemes for multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme built on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which, such as user revocation or re-registration and biometric information protection, are not considered in most existing authentication schemes. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks.

  20. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based cryptographic encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  1. A secure biometrics-based authentication scheme for telecare medicine information systems.

    PubMed

    Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng

    2013-10-01

    The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to the Denial-of-Service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.

  2. A hydrological emulator for global applications – HE v1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., the correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling–Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model.
A case study of uncertainty analysis for the 16 world basins with the largest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Lastly, our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
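The monthly abcd model at the core of both HE schemes is compact enough to state directly. A sketch of one monthly time step in the usual Thomas (1981) formulation, which the paper's revised model is assumed to follow (parameter and forcing values in the test are illustrative):

```python
import math

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """One monthly step of the abcd water-balance model (Thomas, 1981 form).
    P: precipitation, PET: potential ET, S: soil moisture, G: groundwater."""
    W = P + S_prev                                  # available water
    o = (W + b) / (2.0 * a)
    Y = o - math.sqrt(o * o - W * b / a)            # evapotranspiration opportunity
    S = Y * math.exp(-PET / b)                      # end-of-month soil moisture
    ET = Y - S                                      # actual evapotranspiration
    avail = W - Y                                   # water leaving the soil zone
    G = (G_prev + c * avail) / (1.0 + d)            # groundwater storage
    Q = (1.0 - c) * avail + d * G                   # direct runoff + baseflow
    return Q, ET, S, G
```

The four parameters a-d control runoff propensity, soil storage capacity, groundwater recharge fraction, and baseflow recession; the step conserves mass exactly, which is one reason such a light model can emulate a heavyweight GHM at a fraction of the cost.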

  3. Sharing Resources In Mobile/Satellite Communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Sue, Miles K.

    1992-01-01

    Report presents preliminary theoretical analysis of several alternative schemes for allocation of satellite resource among terrestrial subscribers of landmobile/satellite communication system. Demand-access and random-access approaches under code-division and frequency-division concepts compared.

  4. Noncontrast magnetic resonance angiography of the hand: improved arterial conspicuity by multidirectional flow-sensitive dephasing magnetization preparation in 3D balanced steady-state free precession imaging.

    PubMed

    Fan, Zhaoyang; Hodnett, Philip A; Davarpanah, Amir H; Scanlon, Timothy G; Sheehan, John J; Varga, John; Carr, James C; Li, Debiao

    2011-08-01

    To develop a flow-sensitive dephasing (FSD) preparative scheme to facilitate multidirectional flow-signal suppression in 3-dimensional balanced steady-state free precession imaging and to validate the feasibility of the refined sequence for noncontrast magnetic resonance angiography (NC-MRA) of the hand. A new FSD preparative scheme was developed that combines 2 conventional FSD modules. Studies using a flow phantom (gadolinium-doped water 15 cm/s) and the hands of 11 healthy volunteers (6 males and 5 females) were performed to compare the proposed FSD scheme with its conventional counterpart with respect to the signal suppression of multidirectional flow. In 9 of the 11 healthy subjects and 2 patients with suspected vasculitis and documented Raynaud phenomenon, respectively, 3-dimensional balanced steady-state free precession imaging coupled with the new FSD scheme was compared with spatial-resolution-matched (0.94 × 0.94 × 0.94 mm) contrast-enhanced magnetic resonance angiography (0.15 mmol/kg gadopentetate dimeglumine) in terms of overall image quality, venous contamination, motion degradation, and arterial conspicuity. The proposed FSD scheme was able to suppress 2-dimensional flow signal in the flow phantom and hands and yielded significantly higher arterial conspicuity scores than the conventional scheme did on NC-MRA at the regions of common digitals and proper digitals. Compared with contrast-enhanced magnetic resonance angiography, the refined NC-MRA technique yielded comparable overall image quality and motion degradation, significantly less venous contamination, and significantly higher arterial conspicuity scores at digital arteries. The FSD-based NC-MRA technique is improved in the depiction of multidirectional flow by applying a 2-module FSD preparation, which enhances its potential to serve as an alternative magnetic resonance angiography technique for the assessment of hand vascular abnormalities.

  5. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers.

    PubMed

    Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L

    2010-08-01

    To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98–10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Recombination monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S. Y.; Blaskiewicz, M.

    This is a brief report on LEReC recombination monitor design considerations. The recombination-produced Au 78+ ion rate is reviewed. Based on this, two designs are discussed. One is to use the large-dispersion lattice. It is shown that even with the large separation of the Au 78+ beam from the Au 79+ beam, continued monitoring of the recombination is not possible; accumulation of Au 78+ ions is needed, plus collimation of the Au 79+ beam. In the other design, it is shown that the recombination monitor can be built based on the proposed scheme with the nominal lattice. From a machine operation point of view, this design is preferable. Finally, possible studies and alternative strategies consistent with the basic goal of the monitor are discussed.

  7. Data Gathering and Energy Transfer Dilemma in UAV-Assisted Flying Access Network for IoT.

    PubMed

    Arabi, Sara; Sabir, Essaid; Elbiaze, Halima; Sadik, Mohamed

    2018-05-11

    Recently, Unmanned Aerial Vehicles (UAVs) have emerged as an alternative solution to assist wireless networks, thanks to the numerous advantages they offer over terrestrial fixed base stations. For instance, a UAV can embed a flying base station providing on-demand nomadic access to network services, or wirelessly recharge out-of-battery ground devices. In this paper, we deal with both collecting data from and recharging depleted ground Internet-of-Things (IoT) devices through a UAV used as a flying base station. To extend the network lifetime, we present a novel use of a UAV with an energy harvesting module and wireless recharging capabilities. On one hand, the UAV serves as an energy source, charging depleted ground IoT devices under three policies: (1) a low-battery first scheme; (2) a high-battery first scheme; and (3) a random scheme. On the other hand, the UAV collects data from IoT devices that have sufficient energy to transmit their packets and, in the same phase, exploits the Radio Frequency (RF) signals transmitted by the IoT devices to harvest energy. Since the UAV has a limited coverage time due to its energy constraints, we propose and investigate an efficient trade-off between ground-user recharging time and data gathering time. We also control and optimize the UAV trajectory so that it completes its travel within a minimum time, while minimizing the energy spent and/or enhancing the network lifetime. Extensive numerical results and simulations show how the system behaves under different scenarios and metrics, and examine the added value of a UAV with an energy harvesting module.
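The three charging policies reduce to an ordering rule over the ground devices' battery levels. A minimal sketch, assuming a fixed per-visit energy budget (the budget-based allocation and all names are illustrative; the record's policies specify only the service order):

```python
import random

def charging_order(batteries, policy, seed=0):
    """Order in which the UAV serves ground IoT devices.
    policy: 'low_first', 'high_first', or 'random' (names illustrative)."""
    idx = list(range(len(batteries)))
    if policy == 'low_first':        # most-depleted device first
        idx.sort(key=lambda i: batteries[i])
    elif policy == 'high_first':     # best-charged device first
        idx.sort(key=lambda i: -batteries[i])
    elif policy == 'random':
        random.Random(seed).shuffle(idx)
    else:
        raise ValueError(policy)
    return idx

def charge(batteries, policy, budget, full=1.0):
    """Spend a fixed UAV energy budget topping devices up in policy order."""
    levels = list(batteries)
    for i in charging_order(batteries, policy):
        give = min(full - levels[i], budget)
        levels[i] += give
        budget -= give
        if budget <= 0:
            break
    return levels
```

Under a tight budget the orderings diverge sharply: low-first spends everything on nearly dead devices (maximizing the number kept alive), while high-first tops up devices that can immediately transmit, which is the trade-off the simulations in the record explore.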

  8. A classification scheme for alternative oxidases reveals the taxonomic distribution and evolutionary history of the enzyme in angiosperms.

    PubMed

    Costa, José Hélio; McDonald, Allison E; Arnholdt-Schmitt, Birgit; Fernandes de Melo, Dirce

    2014-11-01

    A classification scheme based on protein phylogenies and the sequence harmony method was used to clarify the taxonomic distribution and evolutionary history of the alternative oxidase (AOX) in angiosperms. Analyses of a large data set showed that the AOX1 and AOX2 subfamilies were distributed into 4 phylogenetic clades: AOX1a-c/1e, AOX1d, AOX2a-c and AOX2d. High diversity in AOX family composition was found. While the AOX2 subfamily was not detected in monocots, the AOX1 subfamily has expanded (AOX1a-e) in the large majority of these plants. In addition, Poales AOX1b and 1d were orthologous to eudicot AOX1d and were therefore renamed AOX1d1 and 1d2. AOX1 or AOX2 losses were detected in some eudicot plants. Several AOX2 duplications (AOX2a-c) were identified in eudicot species, mainly in the asterids. The AOX2b originally identified in eudicots of the Fabales order (soybean, cowpea) was divergent from AOX2a-c, sharing some specific amino acids with AOX1d, and was therefore renamed AOX2d. AOX1d and AOX2d seem to be stress-responsive, facultative and mutually exclusive among species, suggesting a role complementary to AOX1(a) in stress conditions. Based on the data collected, we present a model for the evolutionary history of AOX in angiosperms and highlight specific areas where further research would be most beneficial. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. An European inter-laboratory validation of alternative endpoints of the murine local lymph node assay: 2nd round.

    PubMed

    Ehling, G; Hecht, M; Heusener, A; Huesler, J; Gamer, A O; van Loveren, H; Maurer, Th; Riecke, K; Ullmann, L; Ulrich, P; Vandebriel, R; Vohr, H-W

    2005-08-15

    The original local lymph node assay (LLNA) is based on the use of radioactive labelling to measure cell proliferation. Other endpoints for the assessment of proliferation are also authorized by OECD Guideline 429, provided there is appropriate scientific support, including full citations and description of the methodology (OECD, 2002. OECD Guideline for the Testing of Chemicals; Skin Sensitization: Local Lymph Node Assay, Guideline 429. Paris, adopted 24th April 2002.). Here, we describe the outcome of the second round of an inter-laboratory validation of alternative endpoints in the LLNA conducted in nine laboratories in Europe. The validation study was managed and supervised by the Swiss Agency for Therapeutic Products (Swissmedic) in Bern. Ear-draining lymph node (LN) weight and cell counts were used to assess LN cell proliferation instead of [3H]TdR incorporation. In addition, the acute inflammatory skin reaction was measured by determining the weight of circular ear biopsies, to identify the skin irritation properties of the test items. The statistical analysis was performed in the Department of Statistics at the University of Bern. Similar to the EC(3) values defined for the radioactive method, threshold values were calculated for the endpoints measured in this modification of the LLNA. It was concluded that all measured parameters have to be taken into consideration when categorising compounds according to their sensitising potencies. Therefore, an assessment scheme was developed, which turned out to be of great importance for consistently distinguishing sensitisation from irritancy based on the data for the different parameters. In contrast to the radioactive method, irritants were picked up by all the laboratories applying this assessment scheme.

  10. Full duplex fiber link for alternative wired and wireless access based on SSB optical millimeter-wave with 4-PAM signal

    NASA Astrophysics Data System (ADS)

    Ma, Jianxin; Zhang, Junjie

    2015-03-01

    A novel full-duplex fiber-wireless link based on a single-sideband (SSB) optical millimeter (mm)-wave carrying a 10 Gbit/s 4-level pulse amplitude modulation (4-PAM) signal is proposed to provide alternative wired and 40 GHz wireless access for user terminals. The SSB optical mm-wave with the 4-PAM signal consists of two tones: one bears the 4-PAM signal and the other is unmodulated with high power. After transmission over the fiber to the hybrid optical network unit (HONU), the SSB optical mm-wave signal can be decomposed by fiber Bragg gratings (FBGs) into the SSB optical mm-wave signal with reduced carrier-to-sideband ratio (the baseband 4-PAM optical signal) for wireless (wired) access and the uplink optical carrier. This frees the HONU from needing a laser source. For the uplink, since the wireless access signal is converted to baseband by power detection, the transmitter in the HONU and the receiver in the optical line terminal (OLT) are co-shared by the wireless and wired accesses, which makes the full-duplex link much simpler. In our scheme, the square-root level spacing of the 4-PAM optical field ensures equally spaced received levels for both the downlink wired and wireless accesses. Since the downlink wireless signal is down-converted to baseband by power detection, no RF local oscillator is needed. To confirm the feasibility of the proposed scheme, a full-duplex simulation link with a 40 GHz SSB optical mm-wave carrying a 10 Gbit/s 4-PAM signal is built. The simulation results show that both down- and uplinks for either wired or wireless access maintain good performance even when the standard single-mode fiber (SSMF) link length is extended to 40 km.
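The "square-root increment level" idea is simple to state: if the optical field amplitudes are the square roots of equally spaced powers, a square-law photodetector returns equally spaced electrical levels. A minimal sketch (names illustrative):

```python
import numpy as np

def field_levels(n_levels=4):
    """4-PAM optical *field* amplitudes with square-root-spaced increments:
    0, 1, sqrt(2), sqrt(3) for equally spaced powers 0, 1, 2, 3."""
    return np.sqrt(np.arange(n_levels))

def square_law_detect(field):
    """Photodiode power detection: electrical level ~ |field|^2."""
    return np.abs(field) ** 2
```

With linearly spaced field amplitudes the detected powers would bunch up near zero (0, 1, 4, 9 scaled), degrading the decision margins of the lower levels; the square-root spacing restores uniform eye openings after detection.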

  11. Insured persons dilemma about other family members: a perspective on the national health insurance scheme in Nigeria.

    PubMed

    Umar, Nasir; Mohammed, Shafiu

    2011-09-05

    The need for health care reforms and alternative financing mechanisms in many low- and middle-income countries has been advocated. This led to the introduction of the national health insurance scheme (NHIS) in Nigeria, beginning with the enrollment of formal-sector employees. A qualitative study was conducted to assess enrollees' perceptions of the quality of health care before and after enrollment. Initial results revealed that respondents (heads of households) generally viewed the NHIS favorably, but consistently expressed dissatisfaction with the terms of coverage. Specifically, NHIS enrollment covers only the primary insured person, their spouse and up to four biological children (a child being defined as <18 years of age), in a setting where extended families are common. Dissatisfaction of enrollees could affect their willingness to participate in the insurance scheme, which may in turn affect the success and future extension of the scheme.

  12. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    NASA Astrophysics Data System (ADS)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.

  13. Fast viscosity solutions for shape from shading under a more realistic imaging model

    NASA Astrophysics Data System (ADS)

    Wang, Guohui; Han, Jiuqiang; Jia, Honghai; Zhang, Xinman

    2009-11-01

    Shape from shading (SFS) is a classical and important problem in computer vision. The goal of SFS is to reconstruct the 3-D shape of an object from its 2-D intensity image. To this end, an image irradiance equation describing the relation between the shape of a surface and its brightness variations is used, and is then recast as an explicit partial differential equation (PDE). Using the nonlinear programming principle, we propose a detailed solution to Prados and Faugeras's implicit scheme for approximating the viscosity solution of the resulting PDE. Furthermore, by combining implicit and semi-implicit schemes, a new approximation scheme is presented. To accelerate convergence, we apply the Gauss-Seidel idea and an alternating sweeping strategy to the approximation schemes. Experimental results on both synthetic and real images demonstrate that the proposed methods are fast and accurate.
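The Gauss-Seidel alternating-sweeping strategy is easiest to see on the plain eikonal equation |∇u| = 1, a close relative of the SFS PDE. A sketch, assuming the classic upwind (Godunov-type) update rather than the Prados-Faugeras Hamiltonian used in the paper (grid size, source, and sweep count are illustrative):

```python
import numpy as np

def sweep_eikonal(n, src, h=1.0, n_cycles=4):
    """Fast sweeping for |grad u| = 1 with u(src) = 0: Gauss-Seidel updates
    (new values used immediately) in 4 alternating sweep orderings, so each
    family of characteristic directions converges in one of the sweeps."""
    INF = 1e10
    u = np.full((n, n), INF)
    u[src] = 0.0
    for _ in range(n_cycles):
        for di, dj in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
            irange = range(n) if di > 0 else range(n - 1, -1, -1)
            for i in irange:
                jrange = range(n) if dj > 0 else range(n - 1, -1, -1)
                for j in jrange:
                    if (i, j) == src:
                        continue
                    a = min(u[i - 1, j] if i > 0 else INF,
                            u[i + 1, j] if i < n - 1 else INF)
                    b = min(u[i, j - 1] if j > 0 else INF,
                            u[i, j + 1] if j < n - 1 else INF)
                    if abs(a - b) >= h:          # one-sided (upwind) update
                        cand = min(a, b) + h
                    else:                        # two-sided quadratic update
                        cand = (a + b + np.sqrt(2 * h * h - (a - b) ** 2)) / 2
                    u[i, j] = min(u[i, j], cand)
    return u
```

The same two ingredients (in-place Gauss-Seidel updates plus alternating sweep directions) are what accelerate the SFS iteration in the record; only the per-node update formula changes with the Hamiltonian.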

  14. Understanding security failures of two authentication and key agreement schemes for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra

    2015-03-01

    Smart card based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to provide secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have high computational overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public-key-cryptography-based schemes, and they are important as they present an efficient solution for TMIS. We analyze the security of Lee's scheme and Xu et al.'s scheme. Unfortunately, we find that both schemes are vulnerable to denial-of-service attacks. To understand the security failures of these cryptographic schemes, which is key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.

  15. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that supports patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes essential when a remote patient logs in to the telecare server. Recently, many extended chaotic maps based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attack, and lack of local verification. To overcome these flaws, we propose a chaotic maps and smart cards based password authentication scheme by applying biometric techniques and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.
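
    The "extended chaotic maps" in schemes of this family are typically Chebyshev polynomials, whose semigroup property T_r(T_s(x)) = T_rs(x) plays the role that modular exponentiation plays in Diffie-Hellman key agreement. A minimal sketch of that property (real-valued maps for illustration only; the parameter and keys below are invented, and actual schemes use enhanced maps over a finite field together with hashing, biometrics and smart-card checks):

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

x = 0.53                                      # public parameter
r, s = 7, 11                                  # the two parties' private keys
T_r, T_s = chebyshev(r, x), chebyshev(s, x)   # exchanged public values

# semigroup property: both sides compute the same shared value T_{rs}(x)
k_a = chebyshev(r, T_s)
k_b = chebyshev(s, T_r)
```

    Since T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), both parties derive the same key from the other's public value.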

  16. Application of effective discharge analysis to environmental flow decision-making

    USGS Publications Warehouse

    McKay, S. Kyle; Freeman, Mary C.; Covich, A.P.

    2016-01-01

    Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.
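
    As a rough illustration of the kind of withdrawal rules compared (the discharge values, percentage and minimum-flow threshold below are invented, not those of the Middle Oconee study), a percentage-based rule scales withdrawals with the day's discharge, whereas a fixed-rate rule with a minimum-flow constraint does not:

```python
def percentage_withdrawal(flows, pct):
    """Withdraw a fixed percentage of each day's discharge."""
    taken = [q * pct for q in flows]
    return taken, [q - t for q, t in zip(flows, taken)]

def fixed_withdrawal(flows, rate, min_flow):
    """Withdraw a constant rate, but never draw the river below min_flow."""
    taken = [min(rate, max(q - min_flow, 0.0)) for q in flows]
    return taken, [q - t for q, t in zip(flows, taken)]

daily_flow = [120.0, 80.0, 15.0, 200.0]       # illustrative daily discharges
p_taken, p_left = percentage_withdrawal(daily_flow, 0.10)
f_taken, f_left = fixed_withdrawal(daily_flow, rate=12.0, min_flow=30.0)
```

    On the low-flow day the percentage rule still leaves 90% of the discharge in the channel, while the fixed-rate rule is driven to zero by the minimum-flow constraint; this is the kind of trade-off the effective discharge analysis integrates over the full flow record.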

  17. Application of Effective Discharge Analysis to Environmental Flow Decision-Making.

    PubMed

    McKay, S Kyle; Freeman, Mary C; Covich, Alan P

    2016-06-01

    Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.

  18. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses of previous schemes.

  19. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses of previous schemes. PMID:28759615

  20. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme based on our cryptanalysis of Mishra et al.’s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606

  1. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature, but most have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
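
    The complete subtree method referenced above places each user at a leaf of a binary tree and, on revocation, publishes key updates for the minimal set of subtree roots that together cover exactly the non-revoked leaves. A sketch of that cover computation (heap-style node indexing is an implementation choice for the demo, not taken from the paper):

```python
def cover_set(depth, revoked):
    """Minimal complete-subtree cover of the non-revoked leaves.

    Nodes are heap-indexed: root = 1, children of v are 2v and 2v+1;
    leaves are indices 2**depth .. 2**(depth+1) - 1.
    """
    first_leaf = 1 << depth
    # mark every node on the path from each revoked leaf up to the root
    marked = set()
    for leaf in revoked:
        v = first_leaf + leaf
        while v >= 1:
            marked.add(v)
            v //= 2
    if 1 not in marked:          # nothing revoked: the root covers everyone
        return [1]
    cover = []
    def walk(v):
        if v not in marked:      # whole subtree is non-revoked: emit its root
            cover.append(v)
        elif v < first_leaf:     # marked internal node: recurse into children
            walk(2 * v)
            walk(2 * v + 1)
        # marked leaf: revoked user, emit nothing
    walk(1)
    return cover
```

    For N users and r revoked users the cover contains O(r log(N/r)) nodes, which is what makes the method scalable.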

  2. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme the attribute revocation and grant processes of users are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  3. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage.

    PubMed

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme the attribute revocation and grant processes of users are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.

  4. SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...

  5. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
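
    The transformation theorem at work here says that for an invertible reparameterization Y = g(X), the density transforms as p_Y(y) = p_X(g⁻¹(y)) |det J_{g⁻¹}(y)|, so a density that is uniform in one state space is generally non-uniform in another. A minimal one-dimensional Monte Carlo check (a toy example, not the admissible-region transformation itself):

```python
import math
import random

random.seed(1)
n = 200_000
xs = [random.random() for _ in range(n)]     # X ~ Uniform(0, 1): uniform density
ys = [x * x for x in xs]                     # Y = X^2: same samples, new coordinates

# transformation theorem: p_Y(y) = p_X(sqrt(y)) * |d sqrt(y)/dy| = 1 / (2*sqrt(y))
def p_y(y):
    return 1.0 / (2.0 * math.sqrt(y))

# empirical mass of Y in [0.25, 0.36] vs the analytic prediction
lo, hi = 0.25, 0.36
frac = sum(lo <= y < hi for y in ys) / n
pred = math.sqrt(hi) - math.sqrt(lo)         # integral of p_Y over [lo, hi]
```

    The samples were drawn from a uniform density, yet their density in the transformed coordinate follows 1/(2√y): exactly the non-uniformity after reparameterization that the abstract describes.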

  6. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    PubMed

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
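
    The ECC machinery underlying such schemes reduces to scalar multiplication of points on an elliptic curve. The sketch below runs a bare Diffie-Hellman-style exchange on a tiny textbook curve over F_17; it illustrates only the group arithmetic, not the authors' authentication protocol, and the keys are invented (real deployments use standardized curves over fields of at least 256 bits):

```python
def inv_mod(a, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(a, p - 2, p)

def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + ax + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                          # P + (-P) = infinity
    if P == Q:                               # doubling
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:                                    # general addition
        lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication k * P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# toy curve y^2 = x^3 + 2x + 2 over F_17; G = (5, 1) has order 19
a, p, G = 2, 17, (5, 1)
d_user, d_node = 3, 7                        # private keys (illustrative)
Q_user = ec_mul(d_user, G, a, p)             # exchanged public keys
Q_node = ec_mul(d_node, G, a, p)
shared_u = ec_mul(d_user, Q_node, a, p)      # both sides derive the same point
shared_n = ec_mul(d_node, Q_user, a, p)
```

    Both sides compute d_user * d_node * G, so the shared point agrees without either private key being transmitted.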

  7. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    PubMed Central

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  8. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
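
    The weighted decision fusion stage at the fusion center can be illustrated with the classical Chair-Varshney rule, which weights each cluster's one-bit decision by log-likelihood ratios formed from its detection and false-alarm probabilities. This is a hedged sketch of that generic rule with invented probabilities; the paper's exact weights and its LRT-based soft stage are not reproduced here:

```python
import math

def fuse(decisions, pd, pf, threshold=0.0):
    """Chair-Varshney weighted fusion of local hard decisions.

    decisions[i] = 1 if cluster i reported 'primary user present'.
    pd[i], pf[i] = detection / false-alarm probabilities of cluster i.
    """
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        # a '1' from a reliable cluster carries a large positive weight,
        # a '0' a large negative one; unreliable clusters carry little weight
        llr += math.log(d / f) if u == 1 else math.log((1 - d) / (1 - f))
    return (1 if llr > threshold else 0), llr

# three clusters: two reliable detectors agree, one poor one disagrees
pd = [0.95, 0.90, 0.55]
pf = [0.05, 0.10, 0.45]
decision, llr = fuse([1, 1, 0], pd, pf)
```

    The two reliable clusters outvote the nearly uninformative one, which is the point of weighting hard decisions rather than counting them equally.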

  9. A channel dynamics model for real-time flood forecasting

    USGS Publications Warehouse

    Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.

    1989-01-01

    A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process-noise to measurement-noise ratio.
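
    For reference, the Muskingum scheme that ASPIRE is compared against routes flow through a reach as O2 = C0*I2 + C1*I1 + C2*O1, with coefficients determined by the storage constant K, the weighting factor X and the time step. A minimal sketch with an invented inflow hydrograph (parameter values and units are illustrative):

```python
def muskingum_route(inflow, K, X, dt, outflow0):
    """Route an inflow hydrograph through one reach: O2 = C0*I2 + C1*I1 + C2*O1."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom      # note: c0 + c1 + c2 = 1
    out = [outflow0]
    for i1, i2 in zip(inflow, inflow[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out

inflow = [10, 30, 68, 50, 40, 31, 23, 15, 10, 10]   # illustrative hydrograph (m^3/s)
outflow = muskingum_route(inflow, K=2.0, X=0.2, dt=1.0, outflow0=10.0)
```

    The routed peak is attenuated and delayed relative to the inflow peak, as expected for storage routing.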

  10. Absolute frequency of cesium 6S-8S 822 nm two-photon transition by a high-resolution scheme.

    PubMed

    Wu, Chien-Ming; Liu, Tze-Wei; Wu, Ming-Hsuan; Lee, Ray-Kuang; Cheng, Wang-Yau

    2013-08-15

    We present an alternative scheme for determining the frequencies of cesium (Cs) atom 6S-8S Doppler-free transitions. With the use of a single electro-optical crystal, we simultaneously narrow the laser linewidth, lock the laser frequency, and resolve a narrow spectrum point by point. The error budget for this scheme is presented, and we prove that the transition frequency obtained from the Cs cell at room temperature and with one-layer μ-metal shielding is already very near that for the condition of zero collision and zero magnetic field. We point out that a sophisticated linewidth measurement could provide good guidance for choosing a suitable Cs cell for better frequency accuracy.

  11. Randomised controlled trial comparing effectiveness and acceptability of an early discharge, hospital at home scheme with acute hospital care

    PubMed Central

    Richards, Suzanne H; Coast, Joanna; Gunnell, David J; Peters, Tim J; Pounsford, John; Darlow, Mary-Anne

    1998-01-01

    Objective: To compare effectiveness and acceptability of early discharge to a hospital at home scheme with that of routine discharge from acute hospital. Design: Pragmatic randomised controlled trial. Setting: Acute hospital wards and community in north of Bristol, with a catchment population of about 224 000 people. Subjects: 241 hospitalised but medically stable elderly patients who fulfilled criteria for early discharge to hospital at home scheme and who consented to participate. Interventions: Patients received hospital at home care or routine hospital care. Main outcome measures: Patients’ quality of life, satisfaction, and physical functioning assessed at 4 weeks and 3 months after randomisation to treatment; length of stay in hospital and in hospital at home scheme after randomisation; mortality at 3 months. Results: There were no significant differences in patient mortality, quality of life, and physical functioning between the two arms of the trial at 4 weeks or 3 months. Only one of 11 measures of patient satisfaction was significantly different: hospital at home patients perceived higher levels of involvement in decisions. Length of stay for those receiving routine hospital care was 62% (95% confidence interval 51% to 75%) of length of stay in hospital at home scheme. Conclusions: The early discharge hospital at home scheme was similar to routine hospital discharge in terms of effectiveness and acceptability. Increased length of stay associated with the scheme must be interpreted with caution because of different organisational characteristics of the services.
Key messages: Pressure on hospital beds, the increasing age of the population, and high costs associated with acute hospital care have fuelled the search for alternatives to inpatient hospital care. There were no significant differences between early discharge to the hospital at home scheme and routine hospital care in terms of patient quality of life, physical functioning, and most measures of patient satisfaction. Length of stay for hospital patients was significantly shorter than that of hospital at home patients but, owing to qualitative differences between the two interventions, this does not necessarily mean differences in effectiveness. Early discharge to hospital at home provides an acceptable alternative to routine hospital care in terms of effectiveness and patient acceptability. PMID:9624070

  12. The Politico-Economic Challenges of Ghana’s National Health Insurance Scheme Implementation

    PubMed Central

    Fusheini, Adam

    2016-01-01

    Background: National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme, giving positive ratings while also noting challenges. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Methods: Qualitative in-depth interviews were held with 33 participants of different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Results: Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. Conclusion: The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. 
Thus, we conclude that there is a synergy between implementation and politics, and that achieving UHC under the NHIS requires political stewardship. Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing, with minimal interference in operations. For the sustainability of the scheme, authorities need to review the exemption policy, the rate of contributions (especially from informal-sector employees) and the recruitment criteria for scheme workers, explore additional sources of funding, and re-examine the training needs of employees to strengthen their competences, among other measures. PMID:27694681

  13. The Politico-Economic Challenges of Ghana's National Health Insurance Scheme Implementation.

    PubMed

    Fusheini, Adam

    2016-04-27

    National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme, giving positive ratings while also noting challenges. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Qualitative in-depth interviews were held with 33 participants of different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and that achieving UHC under the NHIS requires political stewardship. 
Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing, with minimal interference in operations. For the sustainability of the scheme, authorities need to review the exemption policy, the rate of contributions (especially from informal-sector employees) and the recruitment criteria for scheme workers, explore additional sources of funding, and re-examine the training needs of employees to strengthen their competences, among other measures. © 2016 by Kerman University of Medical Sciences

  14. A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems

    PubMed Central

    Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok

    2018-01-01

    Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology and then geometric theorems are applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze the properties of three GT-based tracking algorithms, and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme provides generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to the other schemes. PMID:29562621

  15. Implementation of a cryo-electron tomography tilt-scheme optimized for high resolution subtomogram averaging.

    PubMed

    Hagen, Wim J H; Wan, William; Briggs, John A G

    2017-02-01

    Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
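
    The tilt ordering described above can be sketched as follows (a minimal illustration of the strictly alternating order; the actual SerialEM macro also handles tilt grouping, drift thresholds, and focus refinement):

```python
def dose_symmetric_tilts(max_tilt=60, step=3):
    """Tilt angles ordered as described in the abstract: start at zero tilt,
    then alternate between increasingly positive and negative tilts, so the
    least-damaged (highest-resolution) images are recorded at low tilt."""
    angles = [0]
    tilt = step
    while tilt <= max_tilt:
        angles += [tilt, -tilt]   # one step on the positive, then negative branch
        tilt += step
    return angles

# For a +/-60 degree series with a 3 degree increment the order begins
# 0, 3, -3, 6, -6, 9, -9, ...
schedule = dose_symmetric_tilts(60, 3)
```

    Published variants of the scheme alternate in groups of tilts rather than one at a time; the single-step order above is the simplest case consistent with the description.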

  16. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    PubMed Central

    Kumari, Saru

    2013-01-01

The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272

  17. An improved biometrics-based remote user authentication scheme with user anonymity.

    PubMed

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  18. A secure and efficient chaotic map-based authenticated key agreement scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav

    2014-10-01

Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks because its services are provided over the public network. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory, and it enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to a denial-of-service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. We then propose a new chaotic map-based anonymous user authentication scheme for TMIS that overcomes the weaknesses of Jiang et al.'s scheme while retaining its original merits. We also show that our scheme is secure against various known attacks, including those found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.

  19. Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai

    2018-03-01

Deck assembly is a critical quality-control point in the final satellite assembly process, and cable extrusion and structural collision problems during assembly directly affect the development quality and schedule of the satellite. To address these problems, this paper proposes an assembly scheme for satellite decks based on the robot flexibility (compliance) control principle. The scheme is introduced first; next, its key technologies of end-force perception and flexible docking control are studied; then, the implementation process of the deck assembly scheme is described in detail; finally, an actual application case is given. Results show that, compared with the traditional assembly scheme, the proposed scheme has clear advantages in work efficiency, reliability, and universality.

  20. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key related to the attribute is updated, while, with the help of the cloud server, other users' secret keys and the ciphertexts related to this attribute need not be updated. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attack in the general bilinear group model, and semantically secure against chosen keyword attack under the bilinear Diffie-Hellman (BDH) assumption.

  1. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage

    PubMed Central

    Wang, Shangping; Zhang, Yaling

    2018-01-01

Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key related to the attribute is updated, while, with the help of the cloud server, other users' secret keys and the ciphertexts related to this attribute need not be updated. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attack in the general bilinear group model, and semantically secure against chosen keyword attack under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577

  2. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool for distributing and updating the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency, and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550

  3. Knowledge-based grouping of modeled HLA peptide complexes.

    PubMed

    Kangueane, P; Sakharkar, M K; Lim, K S; Hao, H; Lin, K; Chee, R E; Kolatkar, P R

    2000-05-01

Human leukocyte antigens are the most polymorphic of human genes, and multiple sequence alignment shows that such polymorphisms are clustered in the functional peptide binding domains. Because of such polymorphism among the peptide binding residues, the prediction of peptides that bind to specific HLA molecules is very difficult. In recent years two different types of computer-based prediction methods have been developed, and both have their own advantages and disadvantages. The nonavailability of allele-specific binding data restricts the use of knowledge-based prediction methods for a wide range of HLA alleles. Alternatively, the modeling scheme appears to be a promising predictive tool for the selection of peptides that bind to specific HLA molecules. The scoring of the modeled HLA-peptide complexes is a major concern. The present study applies knowledge-based rules (van der Waals clashes and solvent-exposed hydrophobic residues) to distinguish binders from nonbinders. The rules, based on (1) the number of observed atomic clashes between the modeled peptide and the HLA structure, and (2) the number of solvent-exposed hydrophobic residues on the modeled peptide, effectively discriminate experimentally known binders from poor/nonbinders. Solved crystal complexes show no vdW clash (vdWC) in 95% of cases, and no solvent-exposed hydrophobic peptide residues (SEHPR) were seen in 86% of cases. When experimental binding data are compared with the scores predicted by this scheme, 77% of the peptides are correctly grouped as good binders, with a sensitivity of 71%.

  4. PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA

    NASA Astrophysics Data System (ADS)

    Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.

    2014-04-01

    The offline Eulerian AURAMS (A Unified Regional Air quality Modelling System) chemical transport model was adapted to simulate airborne concentrations of seven PAHs (polycyclic aromatic hydrocarbons): phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~5000 24 h-average PAH measurements from 45 sites, most of which were located in urban or industrial areas. Eight of the measurement sites also provided data on particle/gas partitioning which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. The goal of the study was to provide output concentration maps of use to assessing human inhalation exposure to PAHs in ambient air. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. The model showed seasonal differences in prediction quality for volatile species which suggests that a missing emission source such as air-surface exchange should be included in future versions. 
Model performance differed substantially between measurement locations and the limited available evidence suggests that the model's spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.

  5. Laser based bi-directional Gbit ground links with the Tesat transportable adaptive optical ground station

    NASA Astrophysics Data System (ADS)

    Heine, Frank; Saucke, Karen; Troendle, Daniel; Motzigemba, Matthias; Bischl, Hermann; Elser, Dominique; Marquardt, Christoph; Henninger, Hennes; Meyer, Rolf; Richter, Ines; Sodnik, Zoran

    2017-02-01

Optical ground stations can be an alternative to radio-frequency-based transmit (forward) and receive (return) systems for data relay services and other applications, including direct-to-Earth optical communications from low-Earth-orbit spacecraft, deep space receivers, space-based quantum key distribution systems, and Tbps-capacity feeder links to geostationary spacecraft. The Tesat Transportable Adaptive Optical Ground Station (T-AOGS) has been operational since September 2015 at the European Space Agency site in Tenerife, Spain. This paper reports the results of the 2016 experimental campaigns, including the characterization of the optical channel from Tenerife for an optimized coding scheme, the performance of the T-AOGS under different atmospheric conditions, and the first successful measurements of the suitability of the Alphasat LCT optical downlink performance for future continuous-variable quantum key distribution systems.

  6. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
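
    The two-step idea (occurrence first, amount on wet days only) can be sketched on synthetic data. This is a minimal illustration with a single made-up predictor and plain logistic/OLS fits, not the authors' locally weighted implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily record at one location: a single made-up predictor x
n = 200
x = rng.uniform(0, 1, n)
occ = (rng.uniform(0, 1, n) < 0.1 + 0.8 * x).astype(float)      # wet-day indicator
amt = np.where(occ == 1, 2.0 + 5.0 * x + rng.normal(0.0, 0.5, n), 0.0)

# Step 1: logistic regression for precipitation occurrence (plain gradient descent)
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - occ) / n

# Step 2: ordinary least squares for the amount, fitted on wet days only
wet = occ == 1
beta, *_ = np.linalg.lstsq(X[wet], amt[wet], rcond=None)

def estimate(x_new, threshold=0.5):
    """Estimated daily precipitation: the amount regression is applied only
    when the modeled occurrence probability exceeds the threshold."""
    xn = np.array([1.0, x_new])
    p_wet = 1.0 / (1.0 + np.exp(-xn @ w))
    return float(xn @ beta) if p_wet > threshold else 0.0
```

    Separating occurrence from amount is what lets the interpolated field keep a realistic fraction of dry days instead of smearing small positive amounts everywhere.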

  7. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems

    PubMed Central

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-01-01

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which builds up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is effectively implemented by proposing two basis readout topologies of amplifier-based and oscillator-based circuits. For noise-immune design against various noises from inherent human-touch operations, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving a 12-bit resolution of successive approximation register (SAR) of analog-to-digital conversion without additional calibrations. A ROIC prototype that includes the whole proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors. PMID:28368355

  8. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems.

    PubMed

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-04-03

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which builds up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is effectively implemented by proposing two basis readout topologies of amplifier-based and oscillator-based circuits. For noise-immune design against various noises from inherent human-touch operations, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving a 12-bit resolution of successive approximation register (SAR) of analog-to-digital conversion without additional calibrations. A ROIC prototype that includes the whole proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors.

  9. Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories

    PubMed Central

    Fonteneau, Raphael; Murphy, Susan A.; Wehenkel, Louis; Ernst, Damien

    2013-01-01

    In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning. PMID:24049244

  10. Improvement of photon correlation spectroscopy method for measuring nanoparticle size by using attenuated total reflectance.

    PubMed

    Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin

    2012-11-05

Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, its accuracy is reduced in the presence of convective flows in the fluid containing the nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance to reduce the influence of convection currents. The autocorrelation function of the light-scattering intensity was derived for this case, and it was shown that the method affords a significant decrease in the time required to measure particle sizes and an increase in measuring accuracy.
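
    For context, standard photon correlation spectroscopy recovers particle size from the decay rate of the intensity autocorrelation via the Stokes-Einstein relation. A minimal sketch on noise-free synthetic data (all experimental parameters below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Physical constants and assumed experimental parameters (illustrative values)
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 293.15                 # temperature, K
eta = 1.0e-3               # viscosity of water, Pa*s
lam = 633e-9               # laser wavelength, m
n_med = 1.33               # refractive index of water
theta = np.pi / 2          # scattering angle

q = 4 * np.pi * n_med * np.sin(theta / 2) / lam    # scattering vector magnitude

# Synthetic intensity autocorrelation for particles of 50 nm radius:
r_true = 50e-9
D_true = kB * T / (6 * np.pi * eta * r_true)       # Stokes-Einstein diffusion coeff.
tau = np.linspace(1e-6, 1e-3, 500)
g2m1 = np.exp(-2 * D_true * q**2 * tau)            # g2(tau) - 1, single-exponential

# Recover the decay rate by a log-linear least-squares fit, then invert
# Stokes-Einstein to obtain the hydrodynamic radius
gamma = -np.polyfit(tau, np.log(g2m1), 1)[0] / 2   # fitted D * q^2
D_fit = gamma / q**2
r_fit = kB * T / (6 * np.pi * eta * D_fit)
```

    Convection adds a drift term to the correlation decay, which is what biases this fit and what the attenuated-total-reflectance geometry is designed to suppress.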

  11. An Efficient Method to Design Premature End-of-Life Trajectories: A Hypothetical Alternate Fate for Cassini

    NASA Technical Reports Server (NTRS)

    Vaquero, Mar; Senent, Juan

    2015-01-01

    What would happen if, hypothetically, the highly successful Cassini mission were to end prematurely due to lack of propellant or sudden subsystem failure? A solid plan to quickly produce a solution for any given scenario, regardless of where the spacecraft is along its reference path, must be in place to safely dispose of the spacecraft and meet all planetary protection requirements. As a contingency plan for this hypothetical situation, a method to design viable high-fidelity terminating trajectories based on a hybrid approach that exploits two-body and three-body flyby transfers combined with a numerical optimization scheme is detailed in this paper.

  12. Differential identification of Candida species and other yeasts by analysis of [35S]methionine-labeled polypeptide profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, H.D.; Choo, K.B.; Tsai, W.C.

    1988-12-01

This paper describes a scheme for differential identification of Candida species and other yeasts based on autoradiographic analysis of profiles of [35S]methionine-labeled cellular proteins separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Using ATCC strains as references, protein profile analysis showed that different Candida and other yeast species produced distinctively different patterns. Good agreement was observed between results obtained with this approach and with other conventional systems. Being accurate and reproducible, this approach provides a basis for the development of an alternative method for the identification of yeasts isolated from clinical specimens.

  13. Securing Wireless Communications of the Internet of Things from the Physical Layer, An Overview

    NASA Astrophysics Data System (ADS)

    Zhang, Junqing; Duong, Trung; Woods, Roger; Marshall, Alan

    2017-08-01

    The security of the Internet of Things (IoT) is receiving considerable interest as the low power constraints and complexity features of many IoT devices are limiting the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption. These schemes can be implemented and are lightweight, and thus offer practical solutions for providing effective IoT wireless security. Future research to make IoT-based physical layer security more robust and pervasive is also covered.

  14. Multichannel blind deconvolution of spatially misaligned images.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2005-07-01

    Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.

  15. The effects of new pricing and copayment schemes for pharmaceuticals in South Korea.

    PubMed

    Lee, Iyn-Hyang; Bloor, Karen; Hewitt, Catherine; Maynard, Alan

    2012-01-01

    This study examined the effect of new Korean pricing and copayment schemes for pharmaceuticals (1) on per patient drug expenditure, utilisation and unit prices of overall pharmaceuticals; (2) on the utilisation of essential medications and (3) on the utilisation of less costly alternatives to the study medication. Interrupted time series analysis using retrospective observational data. The increasing trend of per patient drug expenditure fell gradually after the introduction of a new copayment scheme. The segmented regression model suggested that per patient drug expenditure might decrease by about 12% 1 year after the copayment increase, compared with the absence of such a policy, with few changes in overall utilisation and unit prices. The level of savings was much smaller when the new price scheme was included, while the effects of a price cut were inconclusive due to the short time period before an additional policy change. Based on the segmented regression models, we estimate that the number of patients filling their antihyperlipidemics prescriptions decreased by 18% in the corresponding period. Those prescribed generic and brand-named antihyperlipidemics declined by around 16 and 19%, respectively, indicating little evidence of generic substitution resulting from the copayment increase. Few changes were found in the use of antihypertensives. The policies under consideration appear to contain costs not by the intended mechanisms, such as substituting generics for brand name products, but by reducing patients' access to costly therapies regardless of clinical necessity. Thus, concerns were raised about potentially compromising overall health and loss of equity in pharmaceutical utilisation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
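
    The segmented regression model used in interrupted time series analysis can be sketched on synthetic data (the change point, coefficients, and noise level below are illustrative assumptions, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly per-patient drug expenditure: a pre-policy trend, then a
# level drop and a slope change after a copayment increase at month t0 = 24
t = np.arange(48)
t0 = 24
post = (t >= t0).astype(float)
y = 100 + 2.0 * t - 10.0 * post - 1.5 * (t - t0) * post + rng.normal(0, 1, t.size)

# Segmented regression: baseline level and trend, plus post-policy changes in both
X = np.column_stack([np.ones_like(t, dtype=float), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, trend_change = beta[2], beta[3]
```

    The fitted `level_change` and `trend_change` coefficients are the immediate and gradual policy effects that this kind of study reports.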

  16. VLC-based indoor location awareness using LED light and image sensors

    NASA Astrophysics Data System (ADS)

    Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

Recently, indoor LED lighting can be considered for constructing green infrastructure with energy saving while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation. In a large, complex shopping mall, for example, location awareness for navigating to a destination is an important issue. However, conventional GPS-based navigation does not work indoors. Alternative location services based on WLAN suffer from low positioning accuracy; in particular, it is difficult to estimate height accurately, and if the height error exceeds the floor-to-floor height, serious problems can result. Conventional navigation is therefore inappropriate for indoor use. A possible solution is a VLC-based location awareness scheme: because indoor LED infrastructure will in any case be installed to provide lighting, it can offer relatively high positioning accuracy when combined with VLC technology. In this paper, we provide a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using two or more image sensors, we can determine the transmitter position with less than 1 m of error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
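
    The multi-sensor positioning step can be illustrated with a generic ray triangulation sketch (sensor positions and the LED location are hypothetical, and the paper's specific lens/reception-plane geometry is not reproduced here):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3-D rays (lens centre plus
    bearing toward the LED's image on the reception plane). Minimizes the
    sum of squared perpendicular distances to the rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two hypothetical sensors at known lens positions observe an LED at (2, 3, 3);
# with ideal noise-free bearings the rays intersect exactly at the LED:
led = np.array([2.0, 3.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])]
directions = [led - o for o in origins]
est = triangulate(origins, directions)
```

    With noisy bearings the same least-squares solve returns the point minimizing the total perpendicular distance to all rays, which is why adding sensors tightens the sub-metre error bound.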

  17. Ecosystem services as a common language for coastal ecosystem-based management.

    PubMed

    Granek, Elise F; Polasky, Stephen; Kappel, Carrie V; Reed, Denise J; Stoms, David M; Koch, Evamaria W; Kennedy, Chris J; Cramer, Lori A; Hacker, Sally D; Barbier, Edward B; Aswani, Shankar; Ruckelshaus, Mary; Perillo, Gerardo M E; Silliman, Brian R; Muthiga, Nyawira; Bael, David; Wolanski, Eric

    2010-02-01

    Ecosystem-based management is logistically and politically challenging because ecosystems are inherently complex and management decisions affect a multitude of groups. Coastal ecosystems, which lie at the interface between marine and terrestrial ecosystems and provide an array of ecosystem services to different groups, aptly illustrate these challenges. Successful ecosystem-based management of coastal ecosystems requires incorporating scientific information and the knowledge and views of interested parties into the decision-making process. Estimating the provision of ecosystem services under alternative management schemes offers a systematic way to incorporate biogeophysical and socioeconomic information and the views of individuals and groups in the policy and management process. Employing ecosystem services as a common language to improve the process of ecosystem-based management presents both benefits and difficulties. Benefits include a transparent method for assessing trade-offs associated with management alternatives, a common set of facts and common currency on which to base negotiations, and improved communication among groups with competing interests or differing worldviews. Yet challenges to this approach remain, including predicting how human interventions will affect ecosystems, how such changes will affect the provision of ecosystem services, and how changes in service provision will affect the welfare of different groups in society. In a case study from Puget Sound, Washington, we illustrate the potential of applying ecosystem services as a common language for ecosystem-based management.

  18. ILUBCG2-11: Solution of 11-banded nonsymmetric linear equation systems by a preconditioned biconjugate gradient routine

    NASA Astrophysics Data System (ADS)

    Chen, Y.-M.; Koniges, A. E.; Anderson, D. V.

    1989-10-01

    The biconjugate gradient method (BCG) provides an attractive alternative to the usual conjugate gradient algorithms for the solution of sparse systems of linear equations with nonsymmetric and indefinite matrix operators. A preconditioned algorithm is given whose form resembles the incomplete L-U conjugate gradient scheme (ILUCG2) presented previously. Although the BCG scheme requires the storage of two additional vectors, it converges in significantly fewer iterations (often half as many), while the number of calculations per iteration remains essentially the same.

  19. News

    NASA Astrophysics Data System (ADS)

    2004-09-01

    Meeting: Brecon hosts 'alternative-style' Education Group Conference
    Meeting: Schools' Physics Group meeting delivers valuable teaching update
    Saturn Mission: PPARC’s Saturn school resource goes online
    Funding: Grant scheme supports Einstein Year activities
    Meeting: Liverpool Teachers’ Conference revives enthusiasm for physics
    Loan Scheme: Moon samples loaned to schools
    Awards: Schoolnet rewards good use of ICT in learning
    Funding: PPARC provides cash for science projects
    Workshop: Experts in physics education research share knowledge at international event
    Bulgaria: Transit of Venus comes to town
    Conference: CERN weekend provides lessons in particle physics
    Summer School: Teachers receive the summer-school treatment

  20. Air cooling of disk of a solid integrally cast turbine rotor for an automotive gas turbine

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.

    1977-01-01

    A thermal analysis is made of surface cooling of a solid, integrally cast turbine rotor disk for an automotive gas turbine engine. Air purge and impingement cooling schemes are considered and compared with an uncooled reference case. Substantial reductions in blade temperature are predicted with each of the cooling schemes studied. It is shown that air cooling can result in a substantial gain in the stress-rupture life of the blade. Alternatively, increases in the turbine inlet temperature are possible.

  1. Conditional equivalence testing: An alternative remedy for publication bias

    PubMed Central

    Gustafson, Paul

    2018-01-01

    We introduce a publication policy that incorporates “conditional equivalence testing” (CET), a two-stage testing scheme in which standard NHST is followed conditionally by testing for equivalence. The idea of CET is carefully considered as it has the potential to address recent concerns about reproducibility and the limited publication of null results. In this paper we detail the implementation of CET, investigate similarities with a Bayesian testing scheme, and outline the basis for how a scientific journal could proceed to reduce publication bias while remaining relevant. PMID:29652891
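
    The two-stage logic of CET can be sketched as follows. The normal approximation, the equivalence margin, and the decision labels are illustrative assumptions for this sketch, not the paper's exact recipe: stage 1 is an ordinary two-sided NHST, and only when it fails to reject does stage 2 run a two-one-sided (TOST) equivalence test.

```python
from scipy.stats import norm

def cet(estimate, se, alpha=0.05, margin=0.5):
    """Two-stage conditional equivalence test (sketch).
    Stage 1: two-sided NHST of H0: effect == 0.
    Stage 2 (only if stage 1 fails to reject): TOST equivalence
    test of H0: |effect| >= margin."""
    z = estimate / se
    p_nhst = 2 * (1 - norm.cdf(abs(z)))
    if p_nhst < alpha:
        return "positive"          # conventional significant result
    # TOST: reject both one-sided hypotheses to declare equivalence
    p_lower = 1 - norm.cdf((estimate + margin) / se)
    p_upper = norm.cdf((estimate - margin) / se)
    if max(p_lower, p_upper) < alpha:
        return "equivalent"        # publishable null
    return "inconclusive"

print(cet(0.9, 0.2))   # large, precise effect
print(cet(0.02, 0.1))  # tight null
print(cet(0.3, 0.4))   # noisy estimate
```

    The third outcome is what distinguishes CET from plain NHST: a noisy non-significant result is labelled inconclusive rather than being conflated with a demonstrated null.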

  2. Challenges of constructing salt cavern gas storage in China

    NASA Astrophysics Data System (ADS)

    Xia, Yan; Yuan, Guangjie; Ban, Fansheng; Zhuang, Xiaoqian; Li, Jingcui

    2017-11-01

    After more than ten years of research and engineering practice in salt cavern gas storage, engineering technologies for geology, drilling, leaching, completion, operation and monitoring have been established. With the rapid growth of domestic natural gas consumption, the demand for underground gas storage is increasing. Because high-quality rock salt resources at depths of about 1000 m are relatively scarce, future salt cavern gas storage facilities will have to be built in deeper rock salt. Under the current domestic conventional construction scheme, construction in deep salt formations will face many problems caused by depth and complex geological conditions, such as increased circulating pressure, tubing blockage, deformation failure and higher completion risk. Considering these difficulties, the differences between the current technical scheme and the twin-well and big-hole construction schemes are analysed. The results show that the twin-well and big-hole schemes have clear advantages in reducing circulating pressure loss, tubing blockage and failure risk, and they can therefore serve as alternative schemes for constructing salt cavern gas storage facilities in deep rock salt.

  3. Achieving universal health care coverage: Current debates in Ghana on covering those outside the formal sector

    PubMed Central

    2012-01-01

    Background Globally, extending financial protection and equitable access to health services to those outside formal sector employment is a major challenge for achieving universal coverage. While some favour contributory schemes, others have embraced tax-funded health service cover for those outside the formal sector. This paper critically examines the issue of how to cover those outside the formal sector through the lens of stakeholder views on the proposed one-time premium payment (OTPP) policy in Ghana. Discussion Ghana in 2004 implemented a National Health Insurance Scheme, based on a contributory model where service benefits are restricted to those who contribute (with some groups exempted from contributing), as the policy direction for moving towards universal coverage. In 2008, the OTPP system was proposed as an alternative way of ensuring coverage for those outside formal sector employment. There are divergent stakeholder views with regard to the meaning of the one-time premium and how it will be financed and sustained. Our stakeholder interviews indicate that the underlying issue being debated is whether the current contributory NHIS model for those outside the formal employment sector should be maintained or whether services for this group should be tax funded. However, the advantages and disadvantages of these alternatives are not being explored in an explicit or systematic way and are obscured by the considerable confusion about the likely design of the OTPP policy. We attempt to contribute to the broader debate about how best to fund coverage for those outside the formal sector by unpacking some of these issues and pointing to the empirical evidence needed to shed further light on appropriate funding mechanisms for universal health systems. Summary The Ghanaian debate on OTPP is related to one of the most important challenges facing low- and middle-income countries seeking to achieve a universal health care system. 
It is critical that there is more extensive debate on the advantages and disadvantages of alternative funding mechanisms, supported by a solid evidence base, and with the policy objective of universal coverage providing the guiding light. PMID:23102454

  4. Galerkin finite element scheme for magnetostrictive structures and composites

    NASA Astrophysics Data System (ADS)

    Kannan, Kidambi Srinivasan

    The ever-increasing role of magnetostrictives in actuation and sensing applications is an indication of their importance in the emerging field of smart structures technology. As newer, and more complex, applications are developed, there is a growing need for a reliable computational tool that can effectively address the magneto-mechanical interactions and other nonlinearities in these materials and in structures incorporating them. This thesis presents a continuum-level, quasi-static, three-dimensional finite element computational scheme for modeling the nonlinear behavior of bulk magnetostrictive materials and particulate magnetostrictive composites. Models for magnetostriction must deal with two sources of nonlinearities: nonlinear body forces/moments in the equilibrium equations governing magneto-mechanical interactions in deformable and magnetized bodies, and nonlinear coupled magneto-mechanical constitutive models for the material of interest. In the present work, classical differential formulations for nonlinear magneto-mechanical interactions are recast in integral form using the weighted-residual method. A discretized finite element form is obtained by applying the Galerkin technique. The finite element formulation is based upon three-dimensional, eight-noded (isoparametric) brick element interpolation functions and magnetostatic infinite elements at the boundary. Two alternative possibilities are explored for establishing the nonlinear incremental constitutive model: characterization in terms of magnetic field, or in terms of magnetization. The former methodology is the one most commonly used in the literature. In this work, a detailed comparative study of both methodologies is carried out. The computational scheme is validated, qualitatively and quantitatively, against experimental measurements published in the literature on structures incorporating the magnetostrictive material Terfenol-D. 
    The influence of nonlinear body forces and body moments of magnetic origin, on the response of magnetostrictive structures to complex mechanical and magnetic loading conditions, is carefully examined. While monolithic magnetostrictive materials have been commercially available since the late eighties, attention in the smart structures research community has recently focussed upon building and using magnetostrictive particulate composite structures for conventional actuation applications and novel sensing methodologies in structural health monitoring. A particulate magnetostrictive composite element has been developed in the present work to model such structures. This composite element incorporates interactions between magnetostrictive particles by combining a numerical micromechanical analysis based on magneto-mechanical Green's functions with a homogenization scheme based upon the Mori-Tanaka approach. This element has been applied to the simulation of particulate actuators and sensors reported in the literature. Simulation results are compared to experimental data for validation purposes. The computational schemes developed, for bulk materials and for composites, are expected to be of great value to researchers and designers of novel applications based on magnetostrictives.

  5. Multicellular Computing Using Conjugation for Wiring

    PubMed Central

    Goñi-Moreno, Angel; Amos, Martyn; de la Cruz, Fernando

    2013-01-01

    Recent efforts in synthetic biology have focussed on the implementation of logical functions within living cells. One aim is to facilitate both internal “re-programming” and external control of cells, with potential applications in a wide range of domains. However, fundamental limitations on the degree to which single cells may be re-engineered have led to a growth of interest in multicellular systems, in which a “computation” is distributed over a number of different cell types, in a manner analogous to modern computer networks. Within this model, individual cell types perform specific sub-tasks, the results of which are then communicated to other cell types for further processing. The manner in which outputs are communicated is therefore of great significance to the overall success of such a scheme. Previous experiments in distributed cellular computation have used global communication schemes, such as quorum sensing (QS), to implement the “wiring” between cell types. While useful, this method lacks specificity and limits the amount of information that may be transferred at any one time. We propose an alternative scheme, based on specific cell-cell conjugation. This mechanism allows for the direct transfer of genetic information between bacteria, via circular DNA strands known as plasmids. We design a multi-cellular population that is able to compute, in a distributed fashion, a Boolean XOR function. Through this, we describe a general scheme for distributed logic that works by mixing different strains in a single population; this constitutes an important advantage of our novel approach. Importantly, the amount of genetic information exchanged through conjugation is significantly higher than the amount possible through QS-based communication. We provide full computational modelling and simulation results, using deterministic, stochastic and spatially explicit methods. 
These simulations explore the behaviour of one possible conjugation-wired cellular computing system under different conditions, and provide baseline information for future laboratory implementations. PMID:23840385
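
    The distributed XOR can be pictured with a toy Boolean model in which each strain computes one AND-NOT term and conjugative transfer of a result-carrying plasmid acts as the OR that combines them. The strain decomposition below is a hypothetical illustration of this division of labour, not the genetic circuit of the paper.

```python
# Toy Boolean model of distributed XOR: XOR(a, b) = (a AND NOT b) OR
# (NOT a AND b). Each "strain" computes one term; transfer of a plasmid
# from either responding strain triggers the population-level output.
# (Strain assignments are hypothetical, for illustration only.)

def strain_A(in1, in2):      # responds when input 1 present, input 2 absent
    return in1 and not in2

def strain_B(in1, in2):      # responds when input 2 present, input 1 absent
    return in2 and not in1

def population_xor(in1, in2):
    # The OR is realised by conjugation: a plasmid arriving from
    # either responding strain is enough to switch on the output.
    return strain_A(in1, in2) or strain_B(in1, in2)

for a in (False, True):
    for b in (False, True):
        print(a, b, population_xor(a, b))
```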

  6. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    NASA Technical Reports Server (NTRS)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which not all the uncertainties can be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions conceivable. The use of these boards, and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a priori unknown environments, are first discussed. It is then described how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. 
Simulation results as well as indoors and outdoors experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or safety enhancing driver's aid using the new fuzzy inferencing hardware system and some human-like reasoning schemes which may include as little as six elemental behaviors embodied in fourteen qualitative rules.

  7. An improved biometrics-based authentication scheme for telecare medical information systems.

    PubMed

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

    Telecare medical information systems (TMIS) offer healthcare delivery services, and patients can acquire their desired medical services conveniently through public networks. The protection of patients' privacy and data confidentiality is therefore significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme protects user privacy and was believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against known session key attacks and impersonation attacks. We therefore present a modified biometrics-based authentication scheme for TMIS that eliminates the aforementioned flaws. Besides, we demonstrate the completeness of the proposed scheme through BAN logic. Compared with related schemes, our protocol provides stronger security and is more practical.

  8. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thereby the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weights directly at all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance is non-linearly related to the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for selected aerosol models. Then it spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. 
    To assess the performance of the algorithm in terms of errors in the surface water reflectance or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. In the simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. For the in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  9. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    NASA Astrophysics Data System (ADS)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can require long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on an sCMOS camera and FP sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. The SNR of PA signals detected with (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme is compared. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  10. Surface reconstruction and deformation monitoring of stratospheric airship based on laser scanning technology

    NASA Astrophysics Data System (ADS)

    Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei

    2018-04-01

    Due to the uncertainty of a stratospheric airship's shape and the safety problems this uncertainty causes, surface reconstruction and surface deformation monitoring of an airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. The new scheme was then compared with the original √3-subdivision scheme. The results show that our subdivision scheme reduces surface shrinkage and the number of narrow triangles while preserving sharp features, so surface reconstruction and surface deformation monitoring of the airship can be conducted precisely with it.
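
    Shepard interpolation, the ingredient added to the √3-subdivision scheme above, estimates a value at a query point as an inverse-distance-weighted average of nearby samples. A minimal sketch follows; the power `p` and the four-point stencil are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def shepard(points, values, query, p=2.0, eps=1e-12):
    """Shepard (inverse-distance-weighted) interpolation: the kind of
    rule a subdivision scheme could use to place a new vertex from
    surrounding scan points (sketch only)."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**p                      # closer points weigh more
    return np.sum(w * values) / np.sum(w)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
# The centre is equidistant from all four samples, so the result is
# their plain average.
print(shepard(pts, vals, np.array([0.5, 0.5])))
```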

  11. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.

    PubMed

    Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-05-22

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives through systems such as e-payment and e-voting. However, in these two systems, state-of-the-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice; this scheme mainly uses a rejection sampling theorem instead of constructing a trapdoor. Meanwhile, this scheme does not depend on complex public key infrastructure and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model, and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms other existing identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.

  12. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks

    PubMed Central

    Zhu, Hongfei; Tan, Yu-an; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-01-01

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people’s lives through systems such as e-payment and e-voting. However, in these two systems, state-of-the-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice; this scheme mainly uses a rejection sampling theorem instead of constructing a trapdoor. Meanwhile, this scheme does not depend on complex public key infrastructure and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model, and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms other existing identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size. PMID:29789475

  13. Concept design of a time-of-flight spectrometer for the measurement of the energy of alpha particles.

    PubMed

    García-Toraño, E

    2018-04-01

    The knowledge of the energies of the alpha particles emitted in the radioactive decay of a nuclide is a key factor in the construction of its decay scheme. Virtually all existing data are based on a few absolute measurements made by magnetic spectrometry (MS), to which most other MS measurements are traced. An alternative would be the use of time-of-flight detectors. This paper discusses the main aspects to be considered in the design of such detectors and the performance that could reasonably be expected. Based on the concepts discussed here, it is estimated that an energy resolution of about 2.5 keV may be attainable with a good-quality source. Copyright © 2017 Elsevier Ltd. All rights reserved.
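
    The time-of-flight principle behind such a spectrometer converts a measured flight time over a known path into a kinetic energy. The sketch below uses a non-relativistic treatment and an assumed 1 m flight path for illustration (a ~5 MeV alpha has v/c ≈ 0.05, so the approximation is good to roughly the 0.2% level).

```python
import math

C = 299_792_458.0          # speed of light, m/s
M_ALPHA_MEV = 3727.379     # alpha-particle rest mass, MeV/c^2

def alpha_energy_mev(flight_path_m, flight_time_s):
    """Non-relativistic kinetic energy from a time-of-flight
    measurement (sketch; path length is an assumed design value)."""
    beta = flight_path_m / flight_time_s / C
    return 0.5 * M_ALPHA_MEV * beta**2

# A 5 MeV alpha over a 1 m flight path takes roughly 65 ns:
t = 1.0 / (C * math.sqrt(2 * 5.0 / M_ALPHA_MEV))
E = alpha_energy_mev(1.0, t)
print(t * 1e9, E)

# Since E ~ t^-2, the resolution scales as dE/E = 2*dt/t: keV-level
# energy resolution at 5 MeV demands timing at the tens-of-picoseconds
# level over a metre-scale path.
```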

  14. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
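
    The diagonalized ADI idea, reducing a multi-dimensional implicit solve to sequences of one-dimensional tridiagonal solves, can be illustrated on a much simpler model problem. The sketch below applies Peaceman-Rachford ADI steps to the 2D heat equation with Dirichlet boundaries (an assumed model problem, not the paper's space-time DG discretization).

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with
    homogeneous Dirichlet boundaries, r = dt/(2*dx^2). Each half-step
    is implicit in one direction only, so only tridiagonal systems
    are solved."""
    n = u.shape[0]
    # Banded storage for the tridiagonal operator I - r*D2,
    # i.e. tridiag(-r, 1 + 2r, -r).
    ab = np.zeros((3, n))
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r

    def explicit(v):                        # apply I + r*D2 along axis 0
        out = (1 - 2 * r) * v
        out[1:] += r * v[:-1]
        out[:-1] += r * v[1:]
        return out

    rhs = explicit(u.T).T                   # explicit in y ...
    u1 = solve_banded((1, 1), ab, rhs)      # ... implicit in x
    rhs = explicit(u1)                      # explicit in x ...
    return solve_banded((1, 1), ab, rhs.T).T  # ... implicit in y

n = 33
x = np.linspace(0, 1, n + 2)[1:-1]
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # decaying eigenmode
for _ in range(10):
    u = adi_step(u, r=0.2)
print(u.max())  # slowly decaying, stays smooth and non-negative
```

    Because each half-step only requires tridiagonal solves, the per-step cost scales linearly with the number of unknowns; this is the same structural property the tensor-product preconditioners above exploit at high order.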

  15. Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  16. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2004-01-01

    The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. 
    The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme together with a sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives, and a fourth-order Runge-Kutta method for time integration.

  17. To sort or not to sort: the impact of spike-sorting on neural decoding performance.

    PubMed

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. 
Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
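
    The unsorted alternative evaluated above, treating every threshold crossing on an electrode as one putative unit, can be sketched as follows. The threshold, sampling rate, and dead-time values are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def threshold_crossings(voltage, thresh, fs, dead_time_s=1e-3):
    """Detect negative-going threshold crossings on one electrode,
    treating every crossing as a spike from a single putative unit
    (sketch of the unsorted scheme; the dead time is an assumed
    refractory period to avoid double-counting)."""
    below = voltage < thresh
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    dead = int(dead_time_s * fs)
    kept, last = [], -dead - 1
    for i in onsets:
        if i - last > dead:
            kept.append(i)
            last = i
    return np.asarray(kept)

fs = 30_000  # Hz (illustrative sampling rate)
t = np.arange(0, 0.01, 1 / fs)
v = np.zeros_like(t)
v[[50, 52, 200]] = -80e-6          # two "spikes", one with a bounce
idx = threshold_crossings(v, thresh=-50e-6, fs=fs)
print(idx)   # the bounce at sample 52 falls inside the dead time
```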

  18. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    NASA Astrophysics Data System (ADS)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. 
Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
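    As a rough illustration of the kind of comparison reported above, the sketch below decodes simulated 2-D kinematics from binned threshold-crossing counts with an optimal linear estimator. Every size, the log-linear tuning model, and all parameters are invented for the demo, not taken from the paper's primate datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the decoding setup described above: 2-D hand
# velocity and binned threshold-crossing counts on 32 electrodes.
T, E = 2000, 32
velocity = rng.standard_normal((T, 2))       # "true" kinematics per time bin
tuning = rng.standard_normal((E, 2))         # linear tuning per electrode
rates = np.exp(0.3 * velocity @ tuning.T)    # log-linear firing rates
counts = rng.poisson(rates)                  # threshold crossings per bin

# Optimal linear estimator (OLE): least-squares regression of kinematics
# on counts over training bins, then reconstruction of held-out movement.
train, test = slice(0, 1500), slice(1500, None)
X = np.column_stack([counts, np.ones(T)])    # counts plus intercept
W, *_ = np.linalg.lstsq(X[train], velocity[train], rcond=None)
vhat = X[test] @ W

r = np.corrcoef(vhat[:, 0], velocity[test, 0])[0, 1]
print(f"held-out correlation, x-velocity: {r:.2f}")
```

    Swapping the regression target or adding lagged count bins turns the same skeleton into the Kalman-filter-style comparisons the abstract describes.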

  19. Universal quantum computation using all-optical hybrid encoding

    NASA Astrophysics Data System (ADS)

    Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou

    2015-04-01

    By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both a single-photon polarization state and a coherent state. Since these schemes can be implemented using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearity. Thus, all of these schemes are nearly deterministic and feasible under current technology conditions, which makes them suitable for large-scale quantum computing. Project supported by the National Natural Science Foundation of China (Grant Nos. 61465013, 11465020, and 11264042).

  20. A licence to vape: Is it time to trial a nicotine licensing scheme to allow Australian adults controlled access to electronic cigarette devices and refill solutions containing nicotine?

    PubMed

    Gartner, Coral; Hall, Wayne

    2015-06-01

    Australia has some of the most restrictive laws concerning the use of nicotine in e-cigarettes. The only option currently available for Australians to legally possess and use nicotine for vaping is a medical prescription, and domestic supply is limited to compounding pharmacies that prepare medicines for specific patients. An alternative regulatory option that could be implemented under current drugs and poisons regulations is a 'nicotine licensing' scheme utilising current provisions for 'dangerous poisons'. This commentary discusses how such a scheme could be used to trial access to nicotine solutions for vaping outside of a 'medicines framework' in Australia. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Error function attack of chaos synchronization based encryption schemes.

    PubMed

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes, and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged by the quality factor. Copyright 2004 American Institute of Physics.
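    To make the error function attack concrete, here is a toy sketch: a stand-in stream cipher keyed by the initial condition of a logistic map, attacked by scanning trial keys and locating the minimum of the mean decryption error. The cipher, the key grid (the true key is deliberately chosen on the attacker's grid for the demo), and all parameters are our own illustrative assumptions, not one of the schemes reviewed in the paper.

```python
import numpy as np

def keystream(key, n, burn=100):
    """Logistic-map keystream seeded by the secret key (toy cipher)."""
    x = key
    for _ in range(burn):                 # discard transient
        x = 4.0 * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out

rng = np.random.default_rng(1)
trials = np.linspace(0.30, 0.33, 3001)    # attacker's key scan grid
true_key = trials[1370]                   # on-grid key, for demonstration
plaintext = rng.random(256)
cipher = plaintext + keystream(true_key, plaintext.size)

# Error function attack: e(k') = mean decryption error for trial key k'.
# The true key shows up as the sharp minimum of e.
errors = np.array([np.mean(np.abs(cipher - keystream(k, plaintext.size)
                                  - plaintext)) for k in trials])
best = trials[int(np.argmin(errors))]
print(f"recovered key: {best:.4f}, error at minimum: {errors.min():.3g}")
```

    In a real attack the sharpness of this minimum, relative to the key-scan resolution, is exactly what the paper's quality factor trades off against encryption speed.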

  2. Security analysis and enhancements of an effective biometric-based remote user authentication scheme using smart cards.

    PubMed

    An, Younghwa

    2012-01-01

    Recently, many biometrics-based user authentication schemes using smart cards have been proposed to address security weaknesses in user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still insecure against various attacks. We also propose an enhanced scheme that removes these security problems, even if the secret information stored in the smart card is revealed to an attacker. Our security analysis shows that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server.

  4. Quantum state matching of qubits via measurement-induced nonlinear transformations

    NASA Astrophysics Data System (ADS)

    Kálmán, Orsolya; Kiss, Tamás

    2018-03-01

    We consider the task of deciding whether an unknown qubit state falls in a prescribed neighborhood of a reference state. We assume that several copies of the unknown state are given and apply a unitary operation pairwise on them combined with a postselection scheme conditioned on the measurement result obtained on one of the qubits of the pair. The resulting transformation is a deterministic, nonlinear, chaotic map in the Hilbert space. We derive a class of these transformations capable of orthogonalizing nonorthogonal qubit states after a few iterations. These nonlinear maps orthogonalize states which correspond to the two different convergence regions of the nonlinear map. Based on the analysis of the border (the so-called Julia set) between the two regions of convergence, we show that it is always possible to find a map capable of deciding whether an unknown state is within a neighborhood of fixed radius around a desired quantum state. We analyze which one- and two-qubit operations would physically realize the scheme. It is possible to find a single two-qubit unitary gate for each map or, alternatively, a universal special two-qubit gate together with single-qubit gates in order to carry out the task. We note that it is enough to have a single physical realization of the required gates due to the iterative nature of the scheme.
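    A minimal numerical sketch of the convergence-region picture, using the simplest such nonlinear map, z ↦ z², acting on the amplitude ratio z = β/α of a qubit α|0⟩ + β|1⟩ (the paper's general maps compose this squaring with a unitary Möbius transformation; the bare square is our simplification). Its Julia set is the unit circle, separating the basins of |0⟩ and |1⟩:

```python
import numpy as np

def iterate(z, n=10):
    """Iterate z -> z**2, where z = beta/alpha for the qubit a|0> + b|1>.
    |z| < 1 converges to the fixed point z = 0 (the state |0>);
    |z| > 1 diverges to z = infinity (the state |1>)."""
    for _ in range(n):
        z = z * z
    return z

# Two nonorthogonal initial states on either side of the Julia set
# (the unit circle) are driven toward orthogonal states:
inside = iterate(0.9 * np.exp(1j * 0.3))    # -> |z| near 0, i.e. |0>
outside = iterate(1.1 * np.exp(1j * 0.3))   # -> |z| huge, i.e. |1>
print(abs(inside), abs(outside))
```

    Deciding whether an unknown state lies within a given radius of a reference state then reduces to checking which basin its iterates fall into.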

  5. An efficient chaotic maps-based authentication and key agreement scheme using smartcards for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2013-12-01

    A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation in both schemes is their violation of the contributory property of key agreements. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme has fewer weaknesses, provides better security, and is more efficient.

  6. Spurious sea ice formation caused by oscillatory ocean tracer advection schemes

    NASA Astrophysics Data System (ADS)

    Naughten, Kaitlin A.; Galton-Fenzi, Benjamin K.; Meissner, Katrin J.; England, Matthew H.; Brassington, Gary B.; Colberg, Frank; Hattermann, Tore; Debernard, Jens B.

    2017-08-01

    Tracer advection schemes used by ocean models are susceptible to artificial oscillations: a form of numerical error whereby the advected field alternates between overshooting and undershooting the exact solution, producing false extrema. Here we show that these oscillations have undesirable interactions with a coupled sea ice model. When oscillations cause the near-surface ocean temperature to fall below the freezing point, sea ice forms for no reason other than numerical error. This spurious sea ice formation has significant and wide-ranging impacts on Southern Ocean simulations, including the disappearance of coastal polynyas, stratification of the water column, erosion of Winter Water, and upwelling of warm Circumpolar Deep Water. This significantly limits the model's suitability for coupled ocean-ice and climate studies. Using the terrain-following-coordinate ocean model ROMS (Regional Ocean Modelling System) coupled to the sea ice model CICE (Community Ice CodE) on a circumpolar Antarctic domain, we compare the performance of three different tracer advection schemes, as well as two levels of parameterised diffusion and the addition of flux limiters to prevent numerical oscillations. The upwind third-order advection scheme performs better than the centered fourth-order and Akima fourth-order advection schemes, with far fewer incidents of spurious sea ice formation. The latter two schemes are less problematic with higher parameterised diffusion, although some supercooling artifacts persist. Spurious supercooling was eliminated by adding flux limiters to the upwind third-order scheme. We present this comparison as evidence of the problematic nature of oscillatory advection schemes in sea ice formation regions, and urge other ocean/sea-ice modellers to exercise caution when using such schemes.
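    The mechanism is easy to reproduce in one dimension. The sketch below (our own minimal stand-in, not the ROMS/CICE configuration) advects a temperature front with a monotone first-order upwind scheme and with the oscillatory second-order Lax-Wendroff scheme; only the latter undershoots the coldest water actually present and dips below the freezing point:

```python
import numpy as np

# 1-D advection of a temperature front: cold water (-1.8 C) upstream of
# warm water (1.0 C), freezing point -1.9 C. Periodic domain; all values
# are illustrative.
N, c, T_freeze = 400, 0.5, -1.9
T0 = np.where(np.arange(N) < N // 2, -1.8, 1.0)

def step_lax_wendroff(T):            # second-order, dispersive (oscillatory)
    Tm, Tp = np.roll(T, 1), np.roll(T, -1)
    return T - 0.5 * c * (Tp - Tm) + 0.5 * c**2 * (Tp - 2 * T + Tm)

def step_upwind(T):                  # first-order, monotone
    return T - c * (T - np.roll(T, 1))

T_lw, T_up = T0.astype(float), T0.astype(float)
for _ in range(100):
    T_lw, T_up = step_lax_wendroff(T_lw), step_upwind(T_up)

print("Lax-Wendroff min:", T_lw.min())   # undershoots below T_freeze
print("upwind min:      ", T_up.min())   # never leaves [-1.8, 1.0]
```

    Coupled to an ice model, the spurious sub-freezing minimum of the oscillatory scheme is precisely what would trigger ice formation "for no reason other than numerical error"; a flux limiter suppresses the undershoot at the cost of some smearing.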

  7. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.

  8. An Initial Investigation of the Effects of Turbulence Models on the Convergence of the RK/Implicit Scheme

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rossow, C.-C.

    2008-01-01

    A three-stage Runge-Kutta (RK) scheme with multigrid and an implicit preconditioner has been shown to be an effective solver for the fluid dynamic equations. This scheme has been applied to both the compressible and essentially incompressible Reynolds-averaged Navier-Stokes (RANS) equations using the algebraic turbulence model of Baldwin and Lomax (BL). In this paper we focus on the convergence of the RK/implicit scheme when the effects of turbulence are represented by either the Spalart-Allmaras model or the Wilcox k-ω model, which are frequently used models in practical fluid dynamic applications. Convergence behavior of the scheme with these turbulence models and the BL model are directly compared. For this initial investigation we solve the flow equations and the partial differential equations of the turbulence models in an indirectly coupled manner. With this approach we examine the convergence behavior of each system. Both point and line symmetric Gauss-Seidel are considered for approximating the inverse of the implicit operator of the flow solver. To solve the turbulence equations we use a diagonally dominant alternating direction implicit (DDADI) scheme. Computational results are presented for three airfoil flow cases and comparisons are made with experimental data. We demonstrate that the two-dimensional RANS equations and transport-type equations for turbulence modeling can be efficiently solved with an indirectly coupled algorithm that uses the RK/implicit scheme for the flow equations.

  9. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods for integrating the various factors involved in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are taken as the criteria, and each element unit of the study area is treated as an alternative. A scheme that finds the alternative closest to an ideal value is an appropriate way to assess the flood risk of a large number of element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, because simulation results vary with the flood scenario and topographical conditions. This ambiguity in the criteria can propagate into the flood hazard map. To account for it, fuzzy logic, which can handle ambiguous expressions, is introduced. In this paper, we produce a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identify the areas assigned the highest hazard grade in the resulting integrated flood hazard map and compare them with those indicated in existing flood risk maps. We also expect that applying the proposed methodology to the production of current flood risk maps would yield hazard maps that additionally rank priority hazard areas, carrying more varied and important information than before. 
    Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; fuzzy TOPSIS. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
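    A crisp (non-fuzzy) TOPSIS sketch of the ranking step, scoring three hypothetical grid cells on the three criteria named above; the fuzzy variant replaces each entry with a fuzzy number and defuzzifies the distances, but the ranking machinery is the same. Weights and values are invented for illustration.

```python
import numpy as np

# Decision matrix: one row per grid cell (alternative), one column per
# criterion. Depth and velocity increase hazard; travel time decreases it.
#            depth [m]  velocity [m/s]  travel time [h]
X = np.array([[2.1, 1.5, 0.5],
              [0.4, 0.3, 6.0],
              [1.2, 2.4, 1.0]])
w = np.array([0.5, 0.3, 0.2])            # criterion weights (assumed)
benefit = np.array([True, True, False])  # True: larger value = more hazardous

V = w * X / np.linalg.norm(X, axis=0)    # vector-normalized, weighted matrix
ideal = np.where(benefit, V.max(0), V.min(0))   # most-hazardous profile
worst = np.where(benefit, V.min(0), V.max(0))   # least-hazardous profile
d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)        # hazard score in [0, 1]
print("hazard ranking, most hazardous first:", np.argsort(-closeness))
```

    Applied per element unit over the whole simulated domain, the closeness scores become the integrated hazard grades that the map displays.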

  10. Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.

    PubMed

    He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk

    2014-10-01

    Radio frequency identification (RFID) technology has been widely adopted and deployed as a dominant identification technology in health care domains such as medical information authentication, patient tracking, and blood transfusion medicine. With increasingly stringent security and privacy requirements for RFID-based authentication schemes, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet those requirements. However, many recently published ECC-based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC-based RFID authentication scheme, integrated with an ID verifier transfer protocol, that overcomes the weaknesses of the existing schemes. A comprehensive security analysis shows the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed scheme is analyzed in terms of computational cost, communication cost, and storage requirement.

  11. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    PubMed

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

    Attribute-based encryption (ABE) is an effective way to achieve flexible and secure access control to data; attribute revocation extends attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has important applications in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation in cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, and the user can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  12. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are assigned no basis. Unlike the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X, and Y bases, our scheme prepares decoy states only in the X and Y bases, which avoids redundant decoy states in the Z basis, saves random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses available in a short time. Numerical simulations show that, considering the finite-size effect with a reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits performance at least comparable to, and sometimes better than, the original decoy-state RFI-QKD scheme. In particular, in terms of resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, which gives it great potential to be adopted in current QKD systems.

  13. Work, Train, Win: Work-Based Learning Design and Management for Productivity Gains. OECD Education Working Papers, No. 135

    ERIC Educational Resources Information Center

    Kis, Viktoria

    2016-01-01

    Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…

  14. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE PAGES

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-25

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
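    A toy version of the classification step: estimate each orbital's kinetic energy from its plane-wave expansion, then flag only the high-kinetic-energy (sharply peaked, semi-core-like) orbitals for a fine B-spline grid. The coefficients, shapes, and threshold below are synthetic stand-ins, not the paper's DFT orbitals.

```python
import numpy as np

rng = np.random.default_rng(2)
G2 = np.linspace(0.0, 400.0, 500)            # |G|^2 for each plane wave

def orbital(width):
    """Synthetic orbital: coefficients decaying on a given |G|^2 scale."""
    c = np.exp(-G2 / (2 * width)) * (1 + 0.05 * rng.standard_normal(G2.size))
    return c / np.linalg.norm(c)

orbitals = [orbital(w) for w in (5, 8, 120, 150)]   # two smooth, two sharp

# Per-orbital kinetic energy: KE = 0.5 * sum_G |c_G|^2 |G|^2.
ke = np.array([0.5 * np.sum(np.abs(c)**2 * G2) for c in orbitals])

threshold = 20.0                              # assumed partition cutoff
fine = ke > threshold                         # spline these on a fine grid
print("per-orbital KE:", np.round(ke, 1))
print("fine-grid orbitals:", np.where(fine)[0])
```

    In this partitioning picture only the flagged orbitals pay the fine-grid memory cost, which is where the roughly 50% savings quoted above comes from.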

  15. Long term care financing in four OECD countries: fiscal burden and distributive effects.

    PubMed

    Karlsson, Martin; Mayhew, Les; Rickayzen, Ben

    2007-01-01

    This paper compares long term care (LTC) systems in four OECD countries (UK, Japan, Sweden and Germany). In the UK, provision is means tested, so that out of pocket payments depend on levels of income, savings and assets. In Sweden, where the system is wholly tax-financed, provision is essentially free at the point of use. In Germany and Japan, provision is financed from recently introduced compulsory insurance schemes, although the details of how each scheme operates and the distributive consequences differ somewhat. The paper analyses the effects of importing the other three countries' systems for financing LTC into the UK, focussing on both the distributive consequences and the tax burden. It finds that the German system would not be an improvement on the current UK system, because it uses a regressive method of financing. Therefore, the discussion of possible alternatives to the present UK system could be restricted to a general tax-based system as used in Sweden or the compulsory insurance system as used in Japan. The results suggest that all three systems would imply increased taxes in the UK.

  16. Adaptive sampling of AEM transients

    NASA Astrophysics Data System (ADS)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). The log-gating has two main benefits: improving the signal to noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring a good resolution at the early times. However, because of its fixed time gates, conventional log-gating takes no account of geological variations in the surveyed area, nor of the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method which adapts the gating on the basis of the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
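    The idea can be sketched with a greedy rule of our own devising (the authors' exact algorithm differs): walk along a noise-free synthetic transient and close a gate whenever the local log-log slope has drifted by more than a tolerance since the gate opened, so gates stay short where the signal shape changes and grow long on smooth decay.

```python
import numpy as np

# Synthetic TDEM transient: a smooth power-law decay plus a faster
# exponential layer response. All parameters are illustrative.
t = np.geomspace(1e-5, 1e-2, 400)                   # time, s
signal = t**-2.5 + 2e9 * np.exp(-t / 3e-4)

logt, logs = np.log(t), np.log(signal)
slope = np.gradient(logs, logt)                     # local log-log slope

# Greedy slope-adaptive gating: close the current gate once the slope
# has moved more than `tol` away from its value at the gate's start.
gates, start, tol = [], 0, 0.15
for i in range(1, t.size):
    if abs(slope[i] - slope[start]) > tol or i == t.size - 1:
        gates.append((start, i))
        start = i
print("number of adaptive gates:", len(gates))
```

    Compared with fixed log-gating, this concentrates gates around the slope change near the exponential response, which is where the geoelectrical information sits.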

  17. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Astrophysics Data System (ADS)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011), or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain in realism in the model's simulation of the diurnal cycle of convection over land. Also, a beta PDF is now employed to represent the normalized mass flux profile. This opens up an additional avenue for applying stochasticism in the scheme.

  19. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly prominent in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements per channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. It applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a suitable choice for real-time applications such as Digital Cinema.
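    For reference, one level of the reversible 5/3 lifting transform on a 1-D integer signal fits in a few lines; this sketch uses periodic extension and assumes even-length input for brevity, whereas JPEG2000 itself specifies symmetric boundary extension.

```python
import numpy as np

def fwd53(x):
    """One level of the LeGall 5/3 lifting DWT (even-length input,
    periodic extension). Returns (low-pass, high-pass) integer bands."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) >> 1        # predict step (high-pass)
    even += (np.roll(odd, 1) + odd + 2) >> 2      # update step (low-pass)
    return even, odd

def inv53(lo, hi):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = lo - ((np.roll(hi, 1) + hi + 2) >> 2)
    odd = hi + ((even + np.roll(even, -1)) >> 1)
    out = np.empty(lo.size + hi.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([3, 7, 21, 9, 0, 4, 8, 12, 5, 1, 6, 2])
lo, hi = fwd53(x)
assert np.array_equal(inv53(lo, hi), x)           # perfect reconstruction
print("low:", lo, "high:", hi)
```

    The predict/update structure is what a two-channel pipeline exploits: even and odd samples stream through the two lifting steps concurrently, with only a few samples of buffering per channel.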

  20. Effect of Vertical Concentration Gradient on Globally Planar Detonation with Detailed Reaction Mechanism

    NASA Astrophysics Data System (ADS)

    Song, Qingguana; Wang, Cheng; Han, Yong; Gao, Dayuan; Duan, Yingliang

    2017-06-01

    Since detonations often initiate and propagate in non-homogeneous mixtures, investigating their behavior in non-uniform mixtures is significant not only for industrial explosions in leaked combustible gas, but also for experimental investigations in which a vertical concentration gradient arises from differences in the molecular weights of the gas mixture. The objective of this work is to show the detonation behavior in mixtures with different concentration gradients using a detailed chemical reaction mechanism. A globally planar detonation in the H2-O2 system is simulated by a high-resolution code based on the fifth-order weighted essentially non-oscillatory (WENO) scheme for spatial discretization and third-order additive Runge-Kutta schemes for time discretization. Different shocked combustion modes appear in the fuel-rich and fuel-lean layers due to the concentration gradient effect. Globally, for the cases with the lower gradient, detonation is sustained through an alternation of multi-head and single-head modes, whereas for the cases with the higher gradient, detonation propagates in a single-head mode. Institute of Chemical Materials, CAEP.

  1. Application of risk management techniques for the remediation of an old mining site in Greece.

    PubMed

    Panagopoulos, I; Karayannis, A; Adam, K; Aravossis, K

    2009-05-01

    This article summarizes the project and risk management of a remediation/reclamation project in Lavrion, Greece. In Thoricos, the past disposal of mining and metallurgical wastes resulted in contamination with heavy metals and acid mine drainage. The objective of this reclamation project was to transform this coastal zone from a contaminated site to an area suitable for recreation purposes. A separate risk assessment study was performed to provide the basis for determining the relevant environmental contamination and to rate the alternative remedial schemes involved. The study used both existing data available from comprehensive studies and newly collected field data. To manage the environmental risk, the isolation-and-minimization option was selected, and a reclamation scheme based on environmental criteria was applied, comprising in situ neutralization, stabilization, and covering of the potentially acid-generating wastes and contaminated soils with a low-permeability geochemical barrier. Additional measures were applied in the areas where highly sulphidic wastes constituted active acid generation sources; these included the encapsulation of the wastes in HDPE liners installed on clay layers.

  2. Diabetes mellitus in Nigeria: The past, present and future

    PubMed Central

    Ogbera, Anthonia Okeoghene; Ekpebegh, Chukwuma

    2014-01-01

    Diabetes mellitus (DM) is a diverse group of metabolic disorders that is often associated with a high disease burden in developing countries such as Nigeria. In the early nineties, little was known about DM in Nigeria; traditionally, people attributed DM to “curses” or “hexes”, and diagnosis was made based on blood or urinary glucose tests. Currently, oral hypoglycaemic agents, but not insulin, are readily accessible and acceptable to persons with DM. The cost of diabetes care is borne in most instances by individuals, and payment is often “out of pocket”, a consequence of a poorly functioning national health insurance scheme. An insulin-requiring individual on a minimum wage would spend 29% of his monthly income on insulin. Complementary and alternative medicines are widely used by persons with DM and form an integral component of DM care. Towards reducing the burden of DM in Nigeria, we suggest concerted efforts by healthcare professionals and stakeholders in the health industry to put in place preventative measures, a better functioning health insurance scheme and a structured DM program. PMID:25512795

  3. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
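The memory argument behind such a partitioning can be sketched in a few lines. This is an illustrative sketch, not the authors' production implementation: the threshold rule in `partition_orbitals` and the uniform mesh-coarsening factor are assumptions made for the example.

```python
import numpy as np

def partition_orbitals(kinetic_energies, threshold):
    """Split orbital indices into low- and high-kinetic-energy groups.

    Low-KE (smooth) orbitals can tolerate a coarser B-spline mesh;
    high-KE (semi-core-like) orbitals keep the fine mesh.
    """
    ke = np.asarray(kinetic_energies, dtype=float)
    low = np.flatnonzero(ke <= threshold)
    high = np.flatnonzero(ke > threshold)
    return low, high

def memory_estimate(n_low, n_high, fine_points, coarsening=2.0):
    """Relative memory vs. storing every orbital on the fine mesh.

    A coarsening factor c applied in each of three dimensions reduces
    the per-orbital spline table by a factor c**3.
    """
    coarse_points = fine_points / coarsening**3
    mixed = n_low * coarse_points + n_high * fine_points
    baseline = (n_low + n_high) * fine_points
    return mixed / baseline
```

For example, coarsening 26 of 32 orbitals by a factor of 2 per direction keeps roughly 29% of the baseline spline memory, illustrating how a modest coarsening of the smooth orbitals dominates the savings.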

  4. Cryptanalysis and improvement of Yan et al.'s biometric-based authentication scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram

    2014-06-01

    Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) to ensure the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that it is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to remove them. We analyze Yan et al.'s scheme and show that it is vulnerable to an off-line password guessing attack and fails to protect user anonymity. Moreover, the login and password change phases of their scheme cannot efficiently verify the correctness of user input, and this inefficiency in the password change phase can lead to a denial of service attack. Further, we design an improved scheme for TMIS that eliminates the drawbacks of Yan et al.'s scheme.

  5. Performance of an optical identification and interrogation system

    NASA Astrophysics Data System (ADS)

    Venugopalan, A.; Ghosh, A. K.; Verma, P.; Cheng, S.

    2008-04-01

    A free-space-optics-based identification and interrogation system has been designed. The applications of the proposed system lie primarily in areas that require a secure means of mutual identification and information exchange between optical readers and tags. Conventional RFIDs raise issues of security threats, electromagnetic interference and health safety. The security of RFID chips is low due to the wide spatial spread of radio waves: malicious nodes can read data being transmitted on the network if they are within receiving range. The proposed system provides an alternative that utilizes the narrow paraxial beams of lasers and an RSA-based authentication scheme. These provide enhanced security for communication between a tag and the base station or reader. The optical reader can also perform remote identification, and the tag can be read from a long distance, given line of sight. The free-space optical identification and interrogation system can be used for inventory management, airport security systems, port security, and communication with high-security systems. The proposed system was implemented with low-cost, off-the-shelf components, and its performance in terms of throughput and bit error rate has been measured and analyzed. The range of operation with a bit error rate lower than 10^-9 was measured to be about 4.5 m. The security of the system rests on the strength of the RSA encryption scheme, implemented with more than 1024 bits.
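The RSA-based challenge-response idea behind such reader-tag authentication can be illustrated with textbook RSA. This is a toy sketch with deliberately small primes; the actual system uses moduli of more than 1024 bits, and a real deployment would also need padding (e.g. OAEP or PSS). The function names (`reader_challenge`, `tag_response`, `reader_verify`) are hypothetical, not from the paper.

```python
import random

# Textbook RSA with toy parameters -- for illustration only; a real tag
# would use >=1024-bit moduli and proper padding.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent held by the tag

def reader_challenge():
    # Reader sends a random nonce to the tag.
    return random.randrange(2, n - 1)

def tag_response(challenge):
    # Tag "signs" the challenge with its private key.
    return pow(challenge, d, n)

def reader_verify(challenge, response):
    # Reader checks the response against the tag's public key.
    return pow(response, e, n) == challenge

c = reader_challenge()
assert reader_verify(c, tag_response(c))
```

A fresh random challenge per interrogation prevents simple replay of a previously observed response.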

  6. An Improvement of Robust and Efficient Biometrics Based Password Authentication Scheme for Telecare Medicine Information Systems Using Extended Chaotic Maps.

    PubMed

    Moon, Jongho; Choi, Younsung; Kim, Jiye; Won, Dongho

    2016-03-01

    Recently, numerous extended chaotic map-based password authentication schemes that employ smart card technology were proposed for Telecare Medical Information Systems (TMISs). In 2015, Lu et al. used Li et al.'s scheme as a basis to propose a password authentication scheme for TMISs that is based on biometrics and smart card technology and employs extended chaotic maps. Lu et al. demonstrated that Li et al.'s scheme has several weaknesses, including violation of session-key security, vulnerability to the user impersonation attack, and lack of local verification. In this paper, however, we show that Lu et al.'s scheme is still insecure: it also violates session-key security and is vulnerable to both the outsider attack and the impersonation attack. To overcome these drawbacks, we retain the useful properties of Lu et al.'s scheme and propose a new password authentication scheme based on smart card technology and extended chaotic maps. We then show that our proposed scheme is more secure and efficient and satisfies the desired security properties.

  7. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
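A minimal sketch of entropy-guided recursive feature elimination in the spirit of E-RFE, run on synthetic data. The chunk-size rule below (drop a fraction of low-weight genes proportional to one minus the normalized weight entropy) is a simplification assumed for illustration; the paper's exact chunking rule and its two-strata evaluation protocol are not reproduced.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only features 0 and 1 carry signal

active = list(range(X.shape[1]))
while len(active) > 4:
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X[:, active], y)
    w = np.abs(clf.coef_).ravel()
    p = w / w.sum()
    entropy = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))  # in [0, 1]
    # Low entropy => weight mass concentrated on few genes, so a large
    # chunk of low-weight genes can be dropped at once (our simplified
    # version of the E-RFE chunking idea).
    chunk = max(1, int((1.0 - entropy) * (len(active) - 4)))
    order = np.argsort(w)                  # ascending weight magnitude
    drop = set(order[:chunk])
    active = [f for i, f in enumerate(active) if i not in drop]

print(sorted(active))
```

Because whole chunks of uninformative features are eliminated per refit, far fewer SVM trainings are needed than with one-at-a-time RFE, which is the source of the reported speed-up.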

  8. Theoretical and experimental study on multimode optical fiber grating

    NASA Astrophysics Data System (ADS)

    Yunming, Wang; Jingcao, Dai; Mingde, Zhang; Xiaohan, Sun

    2005-06-01

    The characteristics of multimode optical fiber Bragg gratings (MMFBGs) are studied theoretically and experimentally. For the first time, an analysis of MMFBG based on a novel quasi-three-dimensional (Q-3D) finite-difference time-domain beam propagation method (Q-FDTD-BPM) is described: the angular component of the vector field solution is separated in cylindrical coordinates, yielding several discrete two-dimensional (2D) equations that simplify the 3D problem. These equations are then solved with an alternating-direction implicit method and the generalized Douglas scheme, which achieves higher accuracy than the regular finite-difference scheme. The 2D field intensities, weighted by power coefficients for the different angular mode orders, are summed to obtain the 3D field distribution in the MMFBG. The presented method is demonstrated to be a suitable simulation tool for analyzing MMFBGs. In addition, using hydrogen-loading and phase-mask techniques, a series of Bragg gratings were written into multimode silica fiber that had been hydrogen-loaded for a month, and the measured spectra agree well with the simulated results. Group delay and differential group delay spectra were obtained using an Agilent 81910A Photonic All-Parameter Analyzer.

  9. Multistep estimators of the between-study variance: The relationship with the Paule-Mandel estimator.

    PubMed

    van Aert, Robbie C M; Jackson, Dan

    2018-04-26

    A wide variety of estimators of the between-study variance are available in random-effects meta-analysis. Many, but not all, of these estimators are based on the method of moments. The DerSimonian-Laird estimator is widely used in applications, but the Paule-Mandel estimator is an alternative that is now recommended. Recently, DerSimonian and Kacker have developed two-step moment-based estimators of the between-study variance. We extend these two-step estimators so that multiple (more than two) steps are used. We establish the surprising result that the multistep estimator tends towards the Paule-Mandel estimator as the number of steps becomes large. Hence, the iterative scheme underlying our new multistep estimator provides a hitherto unknown relationship between two-step estimators and the Paule-Mandel estimator. Our analysis suggests that two-step estimators are not necessarily distinct estimators in their own right; instead, they are quantities that are closely related to the usual iterative scheme that is used to calculate the Paule-Mandel estimate. The relationship that we establish between the multistep and Paule-Mandel estimators is another justification for the use of the latter estimator. Two-step and multistep estimators are perhaps best conceptualized as approximate Paule-Mandel estimators. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
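The convergence claim is easy to demonstrate numerically. The sketch below repeatedly applies the generalized method-of-moments update, starting from tau² = 0 (so the first step is the DerSimonian-Laird estimate), and checks that the limit satisfies the Paule-Mandel condition Q_gen(tau²) = k − 1. The toy effect sizes and within-study variances are invented for illustration.

```python
import numpy as np

def mm_update(tau2, y, v):
    """One generalized method-of-moments step (DerSimonian-Kacker style):
    re-weight with the current tau^2, then apply the moment estimator."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)
    num = q - (np.sum(w * v) - np.sum(w**2 * v) / np.sum(w))
    den = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, num / den)

# Toy meta-analysis: effect estimates and within-study variances.
y = np.array([0.02, 0.15, 0.48, 0.85, 1.21, 0.33])
v = np.array([0.030, 0.020, 0.015, 0.050, 0.010, 0.040])

tau2 = mm_update(0.0, y, v)   # step 1: starting from tau^2 = 0 gives DL
for _ in range(200):          # further steps: the "multistep" estimator
    tau2 = mm_update(tau2, y, v)

# At the limit, the Paule-Mandel condition Q_gen(tau2) = k - 1 holds.
w = 1.0 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)
q_gen = np.sum(w * (y - mu) ** 2)
print(tau2, q_gen)
```

At any fixed point of `mm_update`, algebra shows the generalized Q statistic equals k − 1, which is exactly the Paule-Mandel estimating equation, matching the paper's result.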

  10. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are treated explicitly at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
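Each implicit sub-step of such a direction-splitting scheme reduces to tridiagonal solves along one spatial direction, which the Thomas algorithm handles in O(n). A minimal serial sketch follows (the paper's MPI implementation is not reproduced; the model problem and coefficients are illustrative):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n).

    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(b)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):           # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D model problem: an implicit step of u_t = u_xx gives
# (I - (dt/h^2) * D2) u = rhs, a tridiagonal system per grid line.
n, dt_over_h2 = 50, 0.5
a = np.full(n, -dt_over_h2)
c = np.full(n, -dt_over_h2)
b = np.full(n, 1.0 + 2.0 * dt_over_h2)
rhs = np.sin(np.linspace(0.0, np.pi, n))
u = thomas_solve(a, b, c, rhs)
```

In a 2D or 3D splitting scheme, one such solve is performed per grid line per direction, which is what makes the per-step cost linear in the number of unknowns.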

  11. Evaluation of new collision-pair selection models in DSMC

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Hassan; Roohi, Ehsan

    2017-10-01

    The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
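The two selection ingredients mentioned in this abstract, nearest-neighbor pairing and the direction of relative movement, can be sketched compactly. The helper names and the simple "approaching" test below are illustrative assumptions, not the paper's exact algorithms:

```python
import numpy as np

rng = np.random.default_rng(7)
pos = rng.random((20, 2))              # particle positions within one cell
vel = rng.standard_normal((20, 2))     # particle velocities

def nearest_neighbor(i):
    """Deterministic nearest-neighbor selection: particle i is paired
    with its closest neighbor in the cell."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    d[i] = np.inf                      # exclude self-collision
    return int(np.argmin(d))

def approaching(i, j):
    """Direction-of-relative-movement criterion: accept the pair only if
    the relative velocity points against the separation vector, i.e. the
    particles are moving towards each other."""
    return float(np.dot(vel[j] - vel[i], pos[j] - pos[i])) < 0.0

j = nearest_neighbor(0)
```

A scheme combining the two would first restrict candidates to near neighbors and then filter or weight pairs by such a kinematic criterion before sampling collisions.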

  12. Novel approach for dam break flow modeling using computational intelligence

    NASA Astrophysics Data System (ADS)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations that arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven input variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack finite-difference schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.

  13. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    PubMed

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The Telecare Medicine Information System (TMIS) provides effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security; it was later proved insecure against various active and passive attacks. To remove the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we propose an improved anonymous smart-card-based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, it provides efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme using the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with related schemes.

  14. On the effectiveness of a license scheme for E-waste recycling: The challenge of China and India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shinkuma, Takayoshi, E-mail: shinkuma@kansai-u.ac.j; Managi, Shunsuke, E-mail: managi@ynu.ac.j

    2010-07-15

    It is well known that China and India have been recycling centers of WEEE, especially printed circuit boards, and that serious environmental pollution in these countries has been generated by improper recycling methods. After the governments of China and India banned improper recycling by the informal sector, improper recycling activities spread to other places. Then, these governments changed their policies to one of promoting proper recycling by introducing a scheme, under which E-waste recycling requires a license issued by the government. In this paper, the effectiveness of that license scheme is examined by means of an economic model. It can be shown that the license scheme can work effectively only if disposers of E-waste have a responsibility to sell E-waste to license holders. Our results run counter to the idea that international E-waste trade should be banned and provide an alternative solution to the problem.

  15. A hybrid sales forecasting scheme by combining independent component analysis with K-means clustering and support vector regression.

    PubMed

    Lu, Chi-Jie; Chang, Chi-Chang

    2014-01-01

    Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then applied to the K-means algorithm for clustering the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each cluster to generate the final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.
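The three-stage pipeline can be sketched with scikit-learn on synthetic data. The component count, cluster count, and SVR settings below are arbitrary stand-ins, and the in-sample evaluation is only meant to show the data flow, not the paper's out-of-sample protocol:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for sales records: 300 samples x 8 lagged features.
X = rng.laplace(size=(300, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(300)

# Step 1: ICA extracts hidden components from the observed data.
ica = FastICA(n_components=4, random_state=1)
S = ica.fit_transform(X)

# Step 2: cluster the samples in the ICA feature space.
km = KMeans(n_clusters=3, n_init=10, random_state=1)
labels = km.fit_predict(S)

# Step 3: train one SVR per cluster and predict within that cluster.
pred = np.empty_like(y)
for k in range(3):
    idx = labels == k
    pred[idx] = SVR(C=10.0).fit(X[idx], y[idx]).predict(X[idx])

mse = float(np.mean((pred - y) ** 2))
print(mse)
```

Fitting a separate SVR per cluster lets each model specialize on a more homogeneous subset of the data, which is the rationale for the clustering stage.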

  16. A Hybrid Sales Forecasting Scheme by Combining Independent Component Analysis with K-Means Clustering and Support Vector Regression

    PubMed Central

    2014-01-01

    Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then applied to the K-means algorithm for clustering the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each cluster to generate the final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting. PMID:25045738

  17. Stamina in adults: is attachment style a factor?

    PubMed

    Pellegrini, R J; Hicks, R A; Roundtree, T; Inman, G M

    2000-10-01

    The study was designed to extend inquiry on adult attachment style to include the variable of personal stamina. The data were derived from an anonymous survey administered to 163 college students (82 women and 81 men) in introductory psychology classes. Attachment style was measured by the Close Relationship Questionnaire, based on a four-category scheme suggested by Bartholomew. Stamina was evaluated with a self-report scale developed by R. A. Hicks. The pattern of statistically significant differences (p < .001) in the relative frequency with which respondents self-identified with descriptions of each of the four attachment styles on the questionnaire diverged somewhat from those reported previously. Respondents who identified themselves as most accurately described by the questionnaire's alternative defined as characterizing secure attachment had significantly higher stamina scores than did those who self-endorsed the fearful or preoccupied alternatives in that categorical measure. No other pairwise comparisons of stamina scores were statistically significant. The results provide preliminary support for the hypothesis that secure attachment is more facilitative of personal stamina than are insecure styles. Methodological limits on inferences and corresponding alternative interpretations, the potential effectiveness of defensive suppression of the attachment system in dismissing-avoidant adults, and directions for research are discussed.

  18. SIMULATING ATMOSPHERIC EXPOSURE IN A NATIONAL RISK ASSESSMENT USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...

  19. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  20. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G and B components are used to form a matrix. One-dimensional and two-dimensional logistic maps are then used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. At each iteration, an XOR operation encrypts the plain-image matrix, followed by a further transformation that diffuses it. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical and that it is suitable for encrypting color images.
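The keystream idea behind such schemes can be illustrated with the one-dimensional logistic map alone. This sketch omits the paper's full design (alternating 1D/2D maps, plus the permutation and diffusion stages), and the key values are arbitrary:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes by iterating the logistic map
    x -> r*x*(1-x) and quantizing each state to 8 bits."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(pixels, key=(0.3456, 3.99)):
    """XOR the image with the chaotic keystream; applying the same
    function again with the same key decrypts."""
    ks = logistic_keystream(*key, pixels.size)
    return pixels ^ ks.reshape(pixels.shape)

# A tiny stand-in "image": 4x4 RGB with byte-valued pixels.
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
enc = xor_encrypt(img)
dec = xor_encrypt(enc)          # XOR with the same keystream inverts it
```

The sensitivity of the logistic map to its initial condition x0 is what makes the keystream, and hence the ciphertext, key-dependent.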

  1. Techniques for improving transients in learning control systems

    NASA Technical Reports Server (NTRS)

    Chang, C.-K.; Longman, Richard W.; Phan, Minh

    1992-01-01

    A discrete modern control formulation is used to study the nature of the transient behavior of the learning process during repetitions. Several alternative learning control schemes are developed to improve the transient performance. These include a new method using an alternating sign on the learning gain, which is very effective in limiting peak transients and also very useful in multiple-input, multiple-output systems. Other methods include learning at an increasing number of points progressing with time, or an increasing number of points of increasing density.

  2. Phase 1 of the near term hybrid passenger vehicle development program. Appendix B: Trade-off studies, volume 1

    NASA Technical Reports Server (NTRS)

    Traversi, M.; Piccolo, R.

    1980-01-01

    Tradeoff study activities and the analysis process used are described, with emphasis on: (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives. Interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability and maintainability; life-cycle costs; and operational quality. The final vehicle conceptual design is examined.

  3. Crossbar H-mode drift-tube linac design with alternative phase focusing for muon linac

    NASA Astrophysics Data System (ADS)

    Otani, M.; Futatsukawa, K.; Hasegawa, K.; Kitamura, R.; Kondo, Y.; Kurennoy, S.

    2017-07-01

    We have developed a Crossbar H-mode (CH) drift-tube linac (DTL) design with an alternative phase focusing (APF) scheme for a muon linac, in order to measure the anomalous magnetic moment and electric dipole moment (EDM) of muons at the Japan Proton Accelerator Research Complex (J-PARC). The CH-DTL accelerates muons from β = v/c = 0.08 to 0.28 at an operational frequency of 324 MHz. The design and results are described in this paper.

  4. Performance Testing of Thermal Interface Filler Materials in a Bolted Aluminum Interface Under Thermal/Vacuum Conditions

    NASA Technical Reports Server (NTRS)

    Glasgow, Shaun; Kittredge, Ken

    2003-01-01

    A thermal interface material is one of the many tools often used as part of the thermal control scheme for space-based applications. These materials are placed between, for example, an avionics box and a cold plate in order to improve conduction heat transfer so that proper temperatures can be maintained. Historically at Marshall Space Flight Center, CHO-THERM® 1671 has primarily been used for applications where an interface material was deemed necessary. However, numerous alternatives have come on the market in recent years, and it was decided that a number of these materials should be tested against each other to see whether there were better-performing alternatives. The tests were done strictly to compare the thermal performance of the materials relative to each other under repeatable conditions; they do not take into consideration other design issues such as off-gassing, electrical conduction or isolation, etc. This paper details the materials tested, the test apparatus, the procedures, and the results of these tests.

  5. An alternative resource sharing scheme for land mobile satellite services

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Sue, Miles K.

    1990-01-01

    A preliminary comparison between the two competing channelization concepts for the Land Mobile Satellite Services (LMSS), namely frequency division (FD) and code division (CD), is presented. Both random access and demand-assigned approaches are considered under these concepts. The CD concept is compared with the traditional FD concept based on the system consideration and a projected traffic model. It is shown that CD is not particularly attractive for the first generation Mobile Satellite Services because of the spectral occupancy of the network bandwidth. However, the CD concept is a viable alternative for future systems such as the personal access satellite system (PASS) in the Ka-band spectrum where spectral efficiency is not of prime concern. The effects of power robbing and voice activity factor are incorporated. It was shown that the traditional rule of thumb of dividing the number of raw channels by the voice activity factor to obtain the effective number of channels is only valid asymptotically as the aggregated traffic approaches infinity.
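The asymptotic nature of the raw-channels-divided-by-voice-activity rule of thumb can be checked with a small binomial model: with voice activity factor v, each admitted user is active with probability v, and the admissible population is the largest M keeping the overload probability P(active > N) below a target. The 1% target and the channel counts below are assumptions made for illustration:

```python
from scipy.stats import binom

def max_users(n_channels, vaf=0.4, p_overload=0.01):
    """Largest user population M with P(active talkers > N) <= target,
    where each of M users is independently active with probability vaf."""
    m = n_channels                      # M = N can never overload
    while binom.sf(n_channels, m + 1, vaf) <= p_overload:
        m += 1                          # admit users while overload prob is OK
    return m

# Ratio of admissible users to the N / vaf rule of thumb.
ratios = {}
for n in (10, 1000):
    ratios[n] = max_users(n) / (n / 0.4)
print(ratios)
```

For small N the admissible population falls well short of N / v, while for large N the ratio approaches (but never reaches) one, consistent with the abstract's statement that the rule is only valid asymptotically.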

  6. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  7. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  8. An alternative resource sharing scheme for land mobile satellite services

    NASA Astrophysics Data System (ADS)

    Yan, Tsun-Yee; Sue, Miles K.

    A preliminary comparison between the two competing channelization concepts for the Land Mobile Satellite Services (LMSS), namely frequency division (FD) and code division (CD), is presented. Both random access and demand-assigned approaches are considered under these concepts. The CD concept is compared with the traditional FD concept based on system considerations and a projected traffic model. It is shown that CD is not particularly attractive for the first generation Mobile Satellite Services because of the spectral occupancy of the network bandwidth. However, the CD concept is a viable alternative for future systems such as the personal access satellite system (PASS) in the Ka-band spectrum where spectral efficiency is not of prime concern. The effects of power robbing and voice activity factor are incorporated. It is shown that the traditional rule of thumb of dividing the number of raw channels by the voice activity factor to obtain the effective number of channels is only valid asymptotically as the aggregated traffic approaches infinity.
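    The asymptotic caveat about the N/v rule of thumb can be illustrated with a small binomial calculation: for a finite talker population the channel pool must absorb statistical fluctuations in the number of simultaneous talkers, so the supportable population falls short of N/v. This is a hedged sketch; the channel count, activity factor and overload target are illustrative, not the paper's parameters.

    ```python
    from math import comb

    def overload_prob(m_talkers, n_channels, v):
        """P(more than n_channels of m_talkers are talking at once),
        modelling simultaneous talkers as Binomial(m_talkers, v)."""
        return sum(comb(m_talkers, k) * v**k * (1 - v)**(m_talkers - k)
                   for k in range(n_channels + 1, m_talkers + 1))

    def effective_talkers(n_channels, v, p_max=0.01):
        """Largest talker population supportable at overload prob. p_max."""
        m = n_channels
        while overload_prob(m + 1, n_channels, v) <= p_max:
            m += 1
        return m

    n, v = 40, 0.4
    finite = effective_talkers(n, v)   # noticeably below the asymptotic n/v = 100
    ```

    As the channel pool grows, the relative fluctuation shrinks and the supportable population approaches n/v, which is the asymptotic regime the abstract refers to.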

  9. Color encryption scheme based on adapted quantum logistic map

    NASA Astrophysics Data System (ADS)

    Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.

    2014-04-01

    This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of a quantum chaotic logistic map. Then, each pixel is encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme has adequate security for the confidentiality of color images.
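    The keystream-plus-chaining structure can be sketched with a classical logistic map standing in for the paper's quantum logistic map (which includes additional dissipation terms); the parameters, byte-extraction rule and chaining choice are illustrative assumptions, not the paper's exact scheme.

    ```python
    def logistic_keystream(x0, r, n):
        """Generate n keystream bytes from classical logistic-map iterates."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)          # stays in (0, 1) for r <= 4
            out.append(int(x * 256) % 256)
        return out

    def encrypt(pixels, x0=0.3141, r=3.99):
        ks = logistic_keystream(x0, r, len(pixels))
        cipher, prev = [], 0
        for p, k in zip(pixels, ks):
            c = p ^ k ^ prev             # chain each pixel to previous cipher value
            cipher.append(c)
            prev = c
        return cipher

    def decrypt(cipher, x0=0.3141, r=3.99):
        ks = logistic_keystream(x0, r, len(cipher))
        plain, prev = [], 0
        for c, k in zip(cipher, ks):
            plain.append(c ^ k ^ prev)   # undo keystream and chaining
            prev = c
        return plain
    ```

    The chaining term is what makes each cipher pixel depend on all preceding pixels, so identical plaintext regions do not produce identical ciphertext.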

  10. A proportional control scheme for high density force myography.

    PubMed

    Belyea, Alexander T; Englehart, Kevin B; Scheme, Erik J

    2018-08-01

    Force myography (FMG) has been shown to be a potentially higher accuracy alternative to electromyography for pattern recognition based prosthetic control. Classification accuracy, however, is just one factor that affects the usability of a control system. Others, like the ability to start and stop, to coordinate dynamic movements, and to control the velocity of the device through some proportional control scheme, can be of equal importance. To impart effective fine control using FMG-based pattern recognition, it is important that a method of controlling the velocity of each motion be developed. In this work, force myography data were collected from 14 able-bodied participants and one amputee participant as they performed a set of wrist and hand motions. The offline proportional control performance of a standard mean signal amplitude approach and a proposed regression-based alternative was compared. The impact of providing feedback during training, as well as the use of constrained or unconstrained hand and wrist contractions, was also evaluated. It is shown that the mean of rectified channel amplitudes approach commonly employed with electromyography does not translate to force myography. The proposed class-based regression proportional control approach is shown to significantly outperform this standard approach (p < 0.001), yielding R² correlation coefficients of 0.837 and 0.830 for constrained and unconstrained forearm contractions, respectively, for able-bodied participants. No significant difference (p = 0.693) was found in R² performance whether or not feedback was provided during training. The amputee subject achieved a classification accuracy of 83.4% ± 3.47%, demonstrating the ability to distinguish contractions well with FMG. In proportional control, the amputee participant achieved an R² of 0.375 for regression-based proportional control during unconstrained contractions.
This is lower than the unconstrained case for able-bodied subjects, possibly due to difficulty in visualizing contraction level modulation without feedback. This may be remedied by the use of a prosthetic limb that would provide real-time feedback in the form of device speed. A novel class-specific regression-based approach for multi-class control is described and shown to provide an effective means of FMG-based proportional control.
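    The class-specific regression idea can be sketched on synthetic data: train one least-squares regressor per motion class that maps an FMG feature vector to contraction intensity, and at run time apply the regressor of whichever class the classifier selects. The feature model, channel count and noise level below are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_classes = 8, 3
    # Each motion class activates a different channel pattern, scaled by intensity
    patterns = rng.uniform(0.2, 1.0, size=(n_classes, n_channels))

    def simulate(cls, n=200):
        """Synthetic FMG features for one class at random contraction levels."""
        intensity = rng.uniform(0.1, 1.0, n)
        X = intensity[:, None] * patterns[cls] \
            + 0.02 * rng.standard_normal((n, n_channels))
        return X, intensity

    # Train one least-squares regressor per class (with a bias term)
    models = {}
    for cls in range(n_classes):
        X, y = simulate(cls)
        A = np.hstack([X, np.ones((len(X), 1))])
        models[cls], *_ = np.linalg.lstsq(A, y, rcond=None)

    # At run time the classifier picks the class; its regressor sets velocity
    X_test, y_test = simulate(1)
    pred = np.hstack([X_test, np.ones((len(X_test), 1))]) @ models[1]
    r2 = 1 - np.sum((y_test - pred)**2) / np.sum((y_test - y_test.mean())**2)
    ```

    Fitting a separate regressor per class is what lets each motion have its own amplitude-to-velocity mapping, unlike a single global mean-amplitude rule.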

  11. The role of the Therapeutic Goods Administration and the Medicine and Medical Devices Safety Authority in evaluating complementary and alternative medicines in Australia and New Zealand.

    PubMed

    Ghosh, Dilip; Skinner, Margot; Ferguson, Lynnette R

    2006-04-03

    Currently, the regulation of complementary and alternative medicines and related health claims in Australia and New Zealand is managed in a number of ways. Complementary medicines, including herbal, minerals, nutritional/dietary supplements, aromatherapy oils and homeopathic medicines are regulated under therapeutic goods/products legislation. The Therapeutic Goods Administration (TGA), a division of the Commonwealth Department of Health and Ageing, is responsible for administering the provisions of the legislation in Australia. The New Zealand Medicines and Medical Devices Safety Authority (Medsafe) administers the provision of legislation in New Zealand. In December 2003 the Australian and New Zealand governments signed a Treaty to establish a single, bi-national agency to regulate therapeutic products, including medical devices and prescription, over-the-counter and complementary medicines. A single agency will replace the Australian TGA and the New Zealand Medsafe. The role of the new agency will be to safeguard public health through regulation of the quality, safety and efficacy or performance of therapeutic products in both Australia and New Zealand. The major activities of the new joint Australia New Zealand therapeutic products agency are in product licensing, specifying labelling standards and setting the advertising scheme, together with determining the risk classes of medicines and creating an expanded list of ingredients permitted in Class I medicines. A new, expanded definition of complementary medicines is proposed and this definition is currently under consultation. Related Australian and New Zealand legislation is being developed to implement the joint scheme. Once this legislation is passed, the Treaty will come into force and the new joint regulatory scheme will begin. The agency is expected to commence operation no later than 1 July 2006 and will result in a single agency to regulate complementary and alternative medicines.

  12. A Hash Based Remote User Authentication and Authenticated Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng

    2015-11-01

    To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we show that Das's authentication scheme is still vulnerable to modification and user duplication attacks. We therefore propose a secure and efficient authentication scheme for the integrated EPR information system based on a lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
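    To illustrate the primitives such schemes build on, here is a minimal hash-and-XOR challenge-response sketch. This is NOT the paper's protocol; the message flow, names and masking choices are invented purely to show how a party can prove knowledge of a shared secret using only a hash and XOR, without transmitting the secret.

    ```python
    import hashlib
    import os

    def h(*parts):
        """Lightweight hash over the concatenation of byte strings."""
        d = hashlib.sha256()
        for p in parts:
            d.update(p)
        return d.digest()

    def xor(a, b):
        """Bitwise XOR of two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    shared_key = os.urandom(32)      # provisioned to both user and server
    nonce = os.urandom(32)           # fresh server-issued challenge

    # User side: derive a masked token and a proof from key and nonce
    masked = xor(h(shared_key, nonce), h(nonce))
    proof = h(shared_key, nonce, masked)

    # Server side: recompute the expected values and verify
    expected_masked = xor(h(shared_key, nonce), h(nonce))
    ok = proof == h(shared_key, nonce, expected_masked)
    ```

    The fresh nonce prevents replay of an old proof, and only hash and XOR operations are needed on the user side, which is why such constructions suit resource-constrained smart cards.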

  13. A Practical and Secure Coercion-Resistant Scheme for Internet Voting

    NASA Astrophysics Data System (ADS)

    Araújo, Roberto; Foulle, Sébastien; Traoré, Jacques

    Juels, Catalano, and Jakobsson (JCJ) proposed at WPES 2005 the first voting scheme that considers real-world threats and that is more realistic for Internet elections. Their scheme, though, has a quadratic work factor and is therefore not efficient for large scale elections. Based on the work of JCJ, Smith proposed an efficient scheme that has a linear work factor. In this paper we first show that Smith's scheme is insecure. Then we present a new coercion-resistant election scheme with a linear work factor that overcomes the flaw of Smith's proposal. Our solution is based on the group signature scheme of Camenisch and Lysyanskaya (Crypto 2004).

  14. Selection of specific interactors from phage display library based on sea lamprey variable lymphocyte receptor sequences.

    PubMed

    Wezner-Ptasinska, Magdalena; Otlewski, Jacek

    2015-12-01

    Variable lymphocyte receptors (VLRs) are non-immunoglobulin components of adaptive immunity in jawless vertebrates. These proteins, composed of leucine-rich repeat modules, offer some advantages over antibodies in target binding and therefore are attractive candidates for biotechnological applications. In this paper we report the design and characterization of a phage display library based on a previously proposed dVLR scaffold containing six LRR modules [Wezner-Ptasinska et al., 2011]. Our library was designed based on a consensus approach in which the randomization scheme reflects the frequencies of amino acids naturally occurring in the respective positions responsible for antigen recognition. We demonstrate the general applicability of the scaffold by selecting dVLRs specific for lysozyme and S100A7 protein with KD values in the micromolar range. The dVLR library could be used as a convenient alternative to antibodies for effective isolation of high affinity binders.

  15. Pressure gradients fail to predict diffusio-osmosis

    NASA Astrophysics Data System (ADS)

    Liu, Yawei; Ganti, Raman; Frenkel, Daan

    2018-05-01

    We present numerical simulations of diffusio-osmotic flow, i.e. the fluid flow generated by a concentration gradient along a solid-fluid interface. In our study, we consider a number of distinct approaches that have been proposed for computing such flows and compare them with a reference calculation based on direct, non-equilibrium molecular dynamics simulations. As alternatives, we consider schemes that compute diffusio-osmotic flow from the gradient of the chemical potentials of the constituent species and from the gradient of the component of the pressure tensor parallel to the interface. We find that the approach based on treating chemical potential gradients as external forces acting on the various species agrees with the direct simulations, thereby supporting the approach of Marbach et al (2017 J. Chem. Phys. 146 194701). In contrast, an approach based on computing the gradients of the microscopic pressure tensor does not reproduce the direct non-equilibrium results.

  16. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  17. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  18. A Low-Cost Tracking System for Running Race Applications Based on Bluetooth Low Energy Technology.

    PubMed

    Perez-Diaz-de-Cerio, David; Hernández-Solana, Ángela; Valdovinos, Antonio; Valenzuela, Jose Luis

    2018-03-20

    Timing points used in running races and other competition events are generally based on radio-frequency identification (RFID) technology. Athletes' times are calculated via passive RFID tags and reader kits. Specifically, the reader infrastructure needed is complex and requires the deployment of a mat or ramps which hide the receiver antennae under them. Moreover, with the employed tags, it is not possible to transmit additional dynamic information such as pulse or oximetry monitoring, alarms, etc. In this paper we present a system based on two low-complexity schemes allowed in Bluetooth Low Energy (BLE): the non-connectable undirected advertisement process and a modified version of the scannable undirected advertisement process using the new capabilities present in Bluetooth 5. After fully describing the system architecture, which allows full real-time position monitoring of the runners using mobile phones on the organizer side and BLE sensors on the participants' side, we derive the mobility patterns of runners and the capacity requirements, which are decisive for evaluating the performance of the proposed system. They have been obtained from the analysis of real data measured in the last Barcelona Marathon. By means of simulations, we demonstrate that, even under disadvantageous conditions (a 50% error ratio), both schemes perform reliably and are able to detect 100% of the participants in all cases. The cell coverage of the system needs to be adjusted when the non-connectable process is considered. Nevertheless, through simulation and experiments, we show that the proposed scheme based on the new events available in Bluetooth 5 is clearly the best implementation alternative in all cases, regardless of the coverage area and the runner speed. The proposal widely exceeds the detection requirements of the real scenario, surpassing the measured peaks of 20 sensors per second entering the coverage area at speeds ranging from 1.5 m/s to 6.25 m/s.
The designed real test-bed shows that the scheme is able to detect 72 sensors in under 600 ms, comfortably fulfilling the requirements determined for the intended application. The main disadvantage of this system is that the sensors are active, but we have shown that their consumption can be so low (9.5 µA) that, with a typical button cell, the sensor battery life would be over 10,000 h of use.
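    The reliability claim under a 50% error ratio can be sketched with a simple independence model: if each advertising event is lost independently with probability e, a runner who transmits n events while inside the coverage zone is missed with probability e^n. The dwell-time numbers below are illustrative, derived from the zone size and speeds quoted in the abstract; they are not the paper's exact simulation model.

    ```python
    def detection_prob(error_ratio, n_events):
        """Probability that at least one of n advertising events is received,
        assuming independent per-event loss."""
        return 1 - error_ratio ** n_events

    # A runner at 6.25 m/s crossing an assumed 30 m coverage zone with a
    # 100 ms advertising interval transmits roughly 48 advertising events
    dwell_s = 30 / 6.25
    n = round(dwell_s / 0.1)
    p = detection_prob(0.5, n)    # effectively 1 even at a 50% error ratio
    ```

    Even the fastest runners therefore accumulate enough advertising events per crossing that the miss probability is negligible, which is consistent with the 100% detection reported in the simulations.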

  19. A Low-Cost Tracking System for Running Race Applications Based on Bluetooth Low Energy Technology

    PubMed Central

    2018-01-01

    Timing points used in running races and other competition events are generally based on radio-frequency identification (RFID) technology. Athletes’ times are calculated via passive RFID tags and reader kits. Specifically, the reader infrastructure needed is complex and requires the deployment of a mat or ramps which hide the receiver antennae under them. Moreover, with the employed tags, it is not possible to transmit additional dynamic information such as pulse or oximetry monitoring, alarms, etc. In this paper we present a system based on two low-complexity schemes allowed in Bluetooth Low Energy (BLE): the non-connectable undirected advertisement process and a modified version of the scannable undirected advertisement process using the new capabilities present in Bluetooth 5. After fully describing the system architecture, which allows full real-time position monitoring of the runners using mobile phones on the organizer side and BLE sensors on the participants’ side, we derive the mobility patterns of runners and the capacity requirements, which are decisive for evaluating the performance of the proposed system. They have been obtained from the analysis of real data measured in the last Barcelona Marathon. By means of simulations, we demonstrate that, even under disadvantageous conditions (a 50% error ratio), both schemes perform reliably and are able to detect 100% of the participants in all cases. The cell coverage of the system needs to be adjusted when the non-connectable process is considered. Nevertheless, through simulation and experiments, we show that the proposed scheme based on the new events available in Bluetooth 5 is clearly the best implementation alternative in all cases, regardless of the coverage area and the runner speed. The proposal widely exceeds the detection requirements of the real scenario, surpassing the measured peaks of 20 sensors per second entering the coverage area at speeds ranging from 1.5 m/s to 6.25 m/s.
The designed real test-bed shows that the scheme is able to detect 72 sensors in under 600 ms, comfortably fulfilling the requirements determined for the intended application. The main disadvantage of this system is that the sensors are active, but we have shown that their consumption can be so low (9.5 µA) that, with a typical button cell, the sensor battery life would be over 10,000 h of use. PMID:29558432

  20. Coordinating management of water, salinity and trace elements for cotton under mulched drip irrigation with brackish water

    NASA Astrophysics Data System (ADS)

    Jin, M.; Chen, W.; Liang, X.

    2016-12-01

    Rational irrigation with brackish water can increase crop production, but irrational use may cause soil salinization. In order to understand the relationships among water, salt and nutrients (including trace elements) and to find rational schemes for managing water, salinity and nutrients in cotton fields, field and pot experiments were conducted in an arid area of southern Xinjiang, northwest China. Field experiments were performed from 2008 to 2015, and involved mulched drip irrigation during the growing season and flood irrigation afterwards. The average cotton yield over the seven years varied between 3,575 and 5,095 kg/ha, and the irrigation water productivity between 0.91 and 1.16 kg/m3. With the progress of brackish water irrigation, Cu, Fe, Mn and Na showed strong aggregation in the topsoil at the narrow row, whereas the contents of Ca and K decreased in the order of the inter-mulch gap, the wide inter-row and the narrow row. The contents of Cu, Fe, Mn, Ca and K in root soil decreased with cotton growth, whereas Na increased. Although mulched drip irrigation during the growing season resulted in an increase in salinity in the root zone, flood irrigation after harvesting leached the accumulated salts below background levels. Based on the experiments, a scheme for coordinated management of soil water, salt and nutrients is proposed: under the planting pattern of one mulch, two drip lines and four rows, alternating irrigation during the growing season plus a flood irrigation after harvesting or before seeding is the ideal scheme. Numerical simulations, using a solute transport model coupled with root solute uptake, calibrated on the experiments and extended over a further 20 years, suggest that mulched drip irrigation alternating fresh and brackish water during the growing season, with flood irrigation using fresh water after harvesting, is a sustainable irrigation practice that should not lead to soil salinization.
Pot experiments with trace elements and different saline waters showed significant antagonistic effects between NaCl and Mn, Zn or B on cotton growth and yield. Under salinity stress, the Zn concentration in irrigation water affected the uptake of nutrient elements, altered their contents in cotton, and influenced cotton growth and yields.
