Optimizing Scheme for Remote Preparation of Four-particle Cluster-like Entangled States
NASA Astrophysics Data System (ADS)
Wang, Dong; Ye, Liu
2011-09-01
Recently, Ma et al. (Opt. Commun. 283:2640, 2010) proposed a novel scheme for preparing a class of cluster-like entangled states based on a four-particle projective measurement. In this paper, we put forward a new and optimal scheme to realize the remote preparation of this class of cluster-like states with the aid of two bipartite partially entangled channels. Different from the previous scheme, we employ a two-particle projective measurement instead of a four-particle projective measurement during the preparation. Besides, we compute the resource consumption of our scheme, including the classical communication cost and the quantum resource consumption. Moreover, we discuss the features of our scheme and compare the resource consumption and operational complexity of the previous scheme and ours. The results show that our scheme is more economical and feasible than the previous one.
Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho
2017-04-25
User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against impersonation attack and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node and perform illegal smart card revocation/reissue, and we prove that Park et al.'s scheme fails to provide secure revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency as well as anonymity and security. Finally, we compare the security and performance of previous schemes and the proposed scheme, and provide a formal analysis based on the random oracle model. The results show that the proposed scheme resolves the impersonation attack and the other security flaws identified in the security analysis. Furthermore, the performance analysis shows that its computational cost is lower than that of the previous scheme.
Color encryption scheme based on adapted quantum logistic map
NASA Astrophysics Data System (ADS)
Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.
2014-04-01
This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of a quantum logistic map. Each pixel is then encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
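As an illustration of the chaining idea described above, here is a minimal sketch in which a classical logistic map stands in for the quantum logistic map (whose exact form and parameters are not given in the abstract); all names and parameter values are illustrative assumptions, not the paper's construction.

```python
# Sketch: keystream cipher driven by the classical logistic map x -> r*x*(1-x),
# with each cipher pixel chained on the previous one. Illustrative only.

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes by iterating the logistic map."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 255) & 0xFF)
    return out

def encrypt(pixels, x0=0.3, r=3.99):
    """Encrypt each pixel with the keystream and the previous cipher value."""
    ks = logistic_keystream(x0, r, len(pixels))
    cipher, prev = [], 0
    for p, k in zip(pixels, ks):
        c = (p ^ k ^ prev) & 0xFF   # chain on previous cipher pixel
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.3, r=3.99):
    """Invert encrypt() by regenerating the same keystream and chain."""
    ks = logistic_keystream(x0, r, len(cipher))
    plain, prev = [], 0
    for c, k in zip(cipher, ks):
        plain.append((c ^ k ^ prev) & 0xFF)
        prev = c
    return plain
```

The seed `x0` and parameter `r` act as the secret key; changing either by a tiny amount yields a completely different keystream, which is the property such chaotic-map ciphers rely on.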
NASA Astrophysics Data System (ADS)
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite-difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution and permits a much larger step size.
Regolith thermal energy storage for lunar nighttime power
NASA Technical Reports Server (NTRS)
Tillotson, Brian
1992-01-01
A scheme for providing nighttime electric power to a lunar base is described. This scheme stores thermal energy in a pile of regolith. Any such scheme must somehow improve on the poor thermal conductivity of lunar regolith in vacuum. Two previous schemes accomplish this by casting or melting the regolith. The scheme described here wraps the regolith in a gas-tight bag and introduces a light gas to enhance thermal conductivity. This allows the system to be assembled with less energy and equipment than schemes which require melting of regolith. A point design based on the new scheme is presented. Its mass from Earth compares favorably with the mass of a regenerative fuel cell of equal capacity.
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, is discussed for model aerodynamic problems where the Van Leer and Roe schemes had previously failed. The present scheme is based on a splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
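A minimal 1D sketch of the convective/pressure splitting described above, following the split Mach-number and pressure polynomials of the original Liou-Steffen formulation; the function and variable names are illustrative, and a production solver would add limiters and boundary handling.

```python
# Sketch of the AUSM interface flux for the 1D Euler equations.
# Convective terms are upwinded by a split interface Mach number;
# the pressure term is split separately.

def ausm_flux(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    def sound(rho, p):
        return (gamma * p / rho) ** 0.5

    def mach_plus(m):   # M+ split polynomial (subsonic) / upwind (supersonic)
        return 0.25 * (m + 1.0) ** 2 if abs(m) <= 1.0 else 0.5 * (m + abs(m))

    def mach_minus(m):  # M- split polynomial / upwind
        return -0.25 * (m - 1.0) ** 2 if abs(m) <= 1.0 else 0.5 * (m - abs(m))

    def p_plus(m, p):   # pressure split, left state's contribution
        return 0.25 * p * (m + 1.0) ** 2 * (2.0 - m) if abs(m) <= 1.0 \
            else p * (m + abs(m)) / (2.0 * m)

    def p_minus(m, p):  # pressure split, right state's contribution
        return 0.25 * p * (m - 1.0) ** 2 * (2.0 + m) if abs(m) <= 1.0 \
            else p * (m - abs(m)) / (2.0 * m)

    a_l, a_r = sound(rho_l, p_l), sound(rho_r, p_r)
    m_half = mach_plus(u_l / a_l) + mach_minus(u_r / a_r)   # interface Mach
    p_half = p_plus(u_l / a_l, p_l) + p_minus(u_r / a_r, p_r)

    def phi(rho, u, p, a):  # convected vector a*(rho, rho*u, rho*H)
        h = gamma / (gamma - 1.0) * p / rho + 0.5 * u * u
        return (rho * a, rho * a * u, rho * a * h)

    up = phi(rho_l, u_l, p_l, a_l) if m_half >= 0 else phi(rho_r, u_r, p_r, a_r)
    return (m_half * up[0], m_half * up[1] + p_half, m_half * up[2])
```

A quick sanity check: for a uniform state on both sides, the split Mach numbers sum to the local Mach number and the split pressures sum to p, so the interface flux reduces to the exact Euler flux (rho*u, rho*u^2 + p, rho*u*H).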
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved and used as the base for creating immigrants by mutation, which are then inserted into the population. In this way, diversity is not only maintained but maintained in a way that adapts the genetic algorithm more efficiently to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis of some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
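The elitism-based immigrants idea can be sketched in a few lines: each generation, the current elite is mutated to create immigrants that replace the worst individuals. The fitness function, mutation rate, and replacement ratio below are illustrative assumptions, not the paper's settings.

```python
import random

def mutate(bits, rate):
    """Flip each bit independently with probability `rate`."""
    return [b ^ int(random.random() < rate) for b in bits]

def elitism_based_immigrants(pop, fitness, ratio=0.2, rate=0.01):
    """Replace the worst `ratio` of the population with mutated copies
    of the current elite (best individual)."""
    pop = sorted(pop, key=fitness, reverse=True)   # best first
    elite = pop[0]
    n_imm = int(len(pop) * ratio)
    immigrants = [mutate(elite, rate) for _ in range(n_imm)]
    return pop[:len(pop) - n_imm] + immigrants
```

Because the immigrants are mutants of the current elite rather than purely random individuals, they inject diversity that stays close to the region the population is already exploiting, which is the efficiency argument the abstract makes.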
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian
2015-06-01
The Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that allows patients to access health-care delivery services via the internet or mobile networks. Authentication becomes essential when a remote patient logs into the telecare server. Recently, many extended chaotic-map-based authentication schemes using smart cards have been proposed for TMISs. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attack, and lack of local verification. To overcome these flaws, we propose a chaotic-map and smart-card based password authentication scheme that applies a biometric technique and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient, and hence more practical for telemedical environments.
Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.
Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung
2017-01-01
A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, thereby avoiding the weaknesses of previous schemes.
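The algebraic property underlying extended chaotic-map key agreement can be sketched with Chebyshev polynomials over a prime field: their semigroup law T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)) plays the role that modular exponentiation plays in classic Diffie-Hellman. The prime, seed, and private exponents below are toy values for illustration only, not secure parameters.

```python
# Sketch: extended (mod-p) Chebyshev polynomials and a Diffie-Hellman-style
# exchange built on their semigroup property.

def chebyshev(n, x, p):
    """Compute T_n(x) mod p via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    t0, t1 = 1, x % p        # T_0 = 1, T_1 = x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

# Toy key exchange: public (p, x), private exponents r and s.
p, x = 2**13 - 1, 5
r, s = 123, 456
key_a = chebyshev(r, chebyshev(s, x, p), p)   # A computes T_r(T_s(x))
key_b = chebyshev(s, chebyshev(r, x, p), p)   # B computes T_s(T_r(x))
```

Both parties arrive at the same value T_rs(x) mod p, while an eavesdropper seeing only x, T_r(x), and T_s(x) faces the chaotic-map analogue of the Diffie-Hellman problem. (A real implementation would use large parameters and the fast matrix-power evaluation of the recurrence.)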
A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods
Luo, Guangchun; Qin, Ke
2014-01-01
Searchable encryption enables users to securely store and search their documents on a remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders functional extensions. We prove that an asymmetric searchable structure can be converted to a symmetric structure, and that functions can be modeled separately from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component, based on either a symmetric or an asymmetric setting, are converted to uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In this way, all functional components can directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric schemes) and range query (previously only available in asymmetric schemes). PMID:24719565
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel make correct clinical decisions rapidly. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only has a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems of previous schemes and withstands possible attacks.
Lou, Der-Chyuan; Lee, Tian-Fu; Lin, Tsung-Hung
2015-05-01
Authenticated key agreement for telecare medicine information systems provides patients, doctors, nurses and health visitors with efficient and convenient access to medical information systems and remote services over an open network. In order to achieve higher security, many authenticated key agreement schemes have added biometric keys for identification in addition to passwords and smartcards. Owing to their many transmissions and high computational costs, these authenticated key agreement schemes are inefficient in communication and computation. This investigation develops two secure and efficient authenticated key agreement schemes for telecare medicine information systems using biometric keys and extended chaotic maps. One scheme is synchronization-based, while the other is nonce-based. Compared to related approaches, the proposed schemes not only retain the same security properties as previous schemes, but also provide users with privacy protection and have fewer transmissions and lower computational cost.
A multihop key agreement scheme for wireless ad hoc networks based on channel characteristics.
Hao, Zhuo; Zhong, Sheng; Yu, Nenghai
2013-01-01
A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks.
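One common way to turn hop-by-hop link keys into an end-to-end key can be sketched as follows; this illustrates the general multihop idea only, and the paper's actual construction may differ. Each intermediate node publishes the XOR of its two adjacent link keys, which lets the destination recover the key it shares with the source. Consistent with the abstract's security discussion, such a basic construction resists only external eavesdroppers: any internal node on the path knows its own link keys.

```python
# Sketch: extending pairwise (per-hop) link keys to an end-to-end key
# by having each intermediate node publish the XOR of its two link keys.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def relay_hints(link_keys):
    """Intermediate node i publishes k_i XOR k_{i+1} (public hints)."""
    return [xor_bytes(link_keys[i], link_keys[i + 1])
            for i in range(len(link_keys) - 1)]

def destination_key(last_link_key, hints):
    """The destination folds the public hints into its own link key,
    recovering the first link key, which the source already holds."""
    k = last_link_key
    for h in reversed(hints):
        k = xor_bytes(k, h)
    return k
```

An external eavesdropper sees only XOR differences of keys it does not know, so the hints reveal nothing to it; an internal node, by contrast, can unwind the hints with its own link keys, which motivates the paper's improved scheme.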
Son, Seungsik; Jeong, Jongpil
2014-01-01
In this paper, a mobility-aware Dual Pointer Forwarding scheme (mDPF) is applied in Proxy Mobile IPv6 (PMIPv6) networks. The movement of a Mobile Node (MN) is classified as intra-domain or inter-domain handoff. When the MN moves, this scheme can reduce the high signaling overhead of intra-handoff/inter-handoff, because the Local Mobility Anchor (LMA) and Mobile Access Gateway (MAG) are connected by pointer chains. In other words, a handoff between the previously attached MAG (pMAG) and the newly attached MAG (nMAG) is treated as low mobility, while a handoff between the previously attached LMA (pLMA) and the newly attached LMA (nLMA) is treated as high mobility. Based on these mobility-aware binding updates, the overhead of packet delivery can be reduced. Also, we analyse the binding update cost and packet delivery cost of route optimization based on a mathematical analytic model. Analytical results show that our mDPF outperforms PMIPv6 and the other pointer forwarding schemes in terms of reducing the total signaling cost.
A cache-aided multiprocessor rollback recovery scheme
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent
1989-01-01
This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures, for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global check-pointing.
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2015-03-01
The telecare medical information systems (TMISs) enable patients to conveniently enjoy telecare services at home. The protection of patients' privacy is a key issue due to the openness of the communication environment. Authentication is typically adopted to guarantee confidential and authorized interaction between the patient and the remote server. To achieve these goals, numerous remote authentication schemes based on cryptography have been presented. Recently, Arshad et al. (J Med Syst 38(12): 2014) presented a secure and efficient three-factor authenticated key exchange scheme to remedy the weaknesses of Tan et al.'s scheme (J Med Syst 38(3): 2014). In this paper, we found that a successful off-line password attack on Arshad et al.'s scheme allows an adversary to impersonate any user of the system. In order to thwart these security attacks, an enhanced biometric and smart-card based remote authentication scheme for TMISs is proposed. In addition, BAN logic is applied to demonstrate the completeness of the enhanced scheme. Security and performance analyses show that our enhanced scheme satisfies more security properties and incurs lower computational cost than previously proposed schemes.
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that the throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
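The default sliding-window idea mentioned above can be sketched as a sender-side window that opens as deliveries are confirmed; the class name, window size, and cumulative-acknowledgement style below are illustrative, not the RMP specification.

```python
# Sketch: sender-side sliding window. Packets are sent while the window
# has room; the window slides forward as deliveries are confirmed
# (for RMP, confirmation would come from the absence of NACKs).

class SlidingWindow:
    def __init__(self, size):
        self.size = size
        self.base = 0          # oldest unconfirmed sequence number
        self.next_seq = 0      # next sequence number to send

    def can_send(self):
        return self.next_seq < self.base + self.size

    def send(self):
        """Consume one window slot and return the sequence number sent."""
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq):
        """Cumulative confirmation up to and including seq."""
        if seq >= self.base:
            self.base = seq + 1
```

Throttling senders this way bounds the number of packets in flight per multicast group, which is the lever a flow control scheme needs when group throughput must be shared dynamically among members.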
Two-out-of-two color matching based visual cryptography schemes.
Machizaud, Jacques; Fournel, Thierry
2012-09-24
Visual cryptography, which consists of sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with previous color visual cryptography schemes, the proposed one makes it possible to share images without pixel expansion and to detect a forgery, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction that describe printed colors from an optical point of view.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
Use of tannin anticorrosive reaction primer to improve traditional coating systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matamala, G.; Droguett, G.; Smeltzer, W.
1994-04-01
Different anticorrosive schemes applied over plain or previously shot-blasted surfaces of AISI 1010 (UNS G10100) steel plates were compared. Plates were painted with alkydic, vinylic, and epoxy anticorrosive schemes over metal previously treated with a pine tannin reaction primer, and with the same schemes without the primer pretreatment. Anticorrosive tests were conducted in a salt fog chamber according to ASTM B 117-73. Rusting, blistering, and adhesion were assessed over time. The survey was complemented with potentiodynamic scanning tests in sodium chloride (NaCl) solution with a concentration equivalent to seawater. Corrosion currents were determined using Tafel and polarization resistance techniques. Results showed the reaction primer inhibited corrosion by improving adherence. Advantages over traditional conversion primers formulated in a base of zinc chromate in phosphoric medium were evident.
PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs.
Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun
2015-12-09
In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which could otherwise interrupt the return of nodes' ACK packets. As a result, the PHACK scheme has better potential to detect abnormal packet loss and identify suspect nodes, as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme only increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes.
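The localization logic behind per-hop acknowledgements can be sketched as follows: the first hop whose ACK is missing brackets the suspect node. This is an illustrative simplification; the actual PHACK scheme additionally returns each ACK along a different routing path, which this sketch does not model.

```python
# Sketch: localize a suspected dropping node from per-hop ACKs.
# path: node ids from source to sink; acks_received: ids whose ACKs arrived.

def locate_suspect(path, acks_received):
    """Return the pair of nodes between which the packet was dropped,
    or None if every hop acknowledged the packet."""
    last_ok = None
    for node in path:
        if node in acks_received:
            last_ok = node
        else:
            # node did not acknowledge: the drop happened between the
            # last acknowledging node and this one.
            return (last_ok, node)
    return None
```

Because every forwarding node must acknowledge, a selective-forwarding attacker that drops a packet also suppresses all downstream ACKs, so the source can narrow the suspect down to a single hop.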
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu
2017-01-01
In cognitive radio (CR) systems, reasonable power allocation can maximize the transmission rate of CR users, or secondary users (SUs), while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm that combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimisation (PSO). Simulated comparisons of the IAFS algorithm with other intelligent algorithms illustrate its superiority; as a result, our proposed scheme outperforms the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, our proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other mentioned works.
NASA Astrophysics Data System (ADS)
Lin, Guofen; Hong, Hanshu; Xia, Yunhao; Sun, Zhixin
2017-10-01
Attribute-based encryption (ABE) is an interesting cryptographic technique for flexible cloud data sharing access control. However, some open challenges hinder its practical application. In previous schemes, all attributes are considered to have the same status, whereas in most practical scenarios they do not. Meanwhile, the size of the access policy increases dramatically as its expressiveness grows. In addition, current research hardly notices that mobile front-end devices, such as smartphones, have poor computational performance, while ABE requires a great deal of bilinear pairing computation. In this paper, we propose a key-policy weighted attribute-based encryption scheme without bilinear pairing computation (KP-WABE-WB) for secure cloud data sharing access control. A simple weighted mechanism is presented to describe the different importance of each attribute. We introduce a novel construction of ABE that executes no bilinear pairing computation. Compared to previous schemes, our scheme has better performance in the expressiveness of its access policy and in computational efficiency.
Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo
2012-12-01
The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital signs monitoring devices brings the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme and demonstrates that the enhanced scheme is more secure and robust for use in a telecare medical information system.
A novel semi-quantum secret sharing scheme based on Bell states
NASA Astrophysics Data System (ADS)
Yin, Aihan; Wang, Zefan; Fu, Fangbo
2017-05-01
A semi-quantum secret sharing (SQSS) scheme based on Bell states is proposed in this paper. The sender, who can perform any relevant quantum operations, uses Bell states to share the secret keys with her participants, who are limited to performing classical operations on the transmitted qubits. Our scheme is easy to generalize from three parties to multiple parties and is more efficient than previous schemes [Q. Li, W. H. Chan and D. Y. Long, Phys. Rev. A 82 (2010) 022303; L. Z. Li, D. W. Qiu and P. Mateus, J. Phys. A: Math. Theor. 26 (2013) 045304; C. Xie, L. Z. Li and D. W. Qiu, Int. J. Theor. Phys. 54 (2015) 3819].
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the receding-flow and shock-wave shock tube problems, were not investigated in that work. The objective of this study is therefore to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting; specifically, the convective flux terms are discretized with a hybrid flux-vector splitting known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting with the robustness of flux-vector splitting. The approximate finite difference equation obtained by applying the third-order compact scheme to the AUSM fluxes was then analyzed in detail. For the first-order schemes in the one-dimensional problem, an explicit time integration method is adopted. The developed and modified source code for one-dimensional flow is validated with four test cases: the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results were also used to confirm that the corresponding Riemann problems are correctly defined.
Further analysis compared the characteristics of the AUSM scheme against experimental results from previous works and against computational results generated by the van Leer, KFVS, and AUSMPW schemes. Extending the AUSM scheme from first-order to third-order accuracy yields a remarkable improvement in resolving shocks, contact discontinuities, and rarefaction waves.
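A minimal sketch of the basic first-order AUSM interface flux described above (Liou-Steffen splitting of the interface Mach number and pressure); the higher-order compact treatment and time integration are omitted.

```python
import math

def ausm_flux(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    """Basic AUSM interface flux for the 1D Euler equations.

    Splits the Mach number and pressure into left/right contributions
    and upwinds the convective vector Phi = (rho, rho*u, rho*H) by the
    sign of the interface Mach number.
    """
    def sound(rho, p):
        return math.sqrt(gamma * p / rho)

    def split(M, p, side):
        # side = +1: M+/p+ from the left state; side = -1: M-/p- from the right
        if abs(M) <= 1.0:
            Ms = side * 0.25 * (M + side) ** 2
            ps = 0.25 * p * (M + side) ** 2 * (2.0 - side * M)
        else:
            Ms = 0.5 * (M + side * abs(M))
            ps = 0.5 * p * (M + side * abs(M)) / M
        return Ms, ps

    a_l, a_r = sound(rho_l, p_l), sound(rho_r, p_r)
    Mp, pp = split(u_l / a_l, p_l, +1)
    Mm, pm = split(u_r / a_r, p_r, -1)
    M_half, p_half = Mp + Mm, pp + pm

    def phi(rho, u, p, a):
        E = p / (gamma - 1.0) + 0.5 * rho * u * u  # total energy
        H = (E + p) / rho                          # total enthalpy
        return [rho * a, rho * a * u, rho * a * H]

    up = phi(rho_l, u_l, p_l, a_l) if M_half >= 0.0 else phi(rho_r, u_r, p_r, a_r)
    return [M_half * f + pr for f, pr in zip(up, [0.0, p_half, 0.0])]
```

A quick consistency check: for identical left and right states the split flux must reduce to the exact Euler flux (rho*u, rho*u^2 + p, u*(E + p)).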
Hazard-Ranking of Agricultural Pesticides for Chronic Health Effects in Yuma County, Arizona
Sugeng, Anastasia J.; Beamer, Paloma I.; Lutz, Eric A.; Rosales, Cecilia B.
2013-01-01
With thousands of pesticides registered by the United States Environmental Protection Agency, it is not feasible to sample for all pesticides applied in agricultural communities. Hazard-ranking pesticides based on use, toxicity, and exposure potential can help prioritize community-specific pesticide hazards. This study applied hazard-ranking schemes for cancer, endocrine disruption, and reproductive/developmental toxicity in Yuma County, Arizona. An existing cancer hazard-ranking scheme was modified, and novel schemes for endocrine disruption and reproductive/developmental toxicity were developed to rank pesticide hazards. The hazard-ranking schemes accounted for pesticide use, toxicity, and exposure potential based on chemical properties of each pesticide. Pesticides were ranked as hazards with respect to each health effect, as well as overall chronic health effects. The highest hazard-ranked pesticides for overall chronic health effects were maneb, metam sodium, trifluralin, pronamide, and bifenthrin. The relative pesticide rankings were unique for each health effect. The highest hazard-ranked pesticides differed from those most heavily applied, as well as from those previously detected in Yuma homes over a decade ago. The most hazardous pesticides for cancer in Yuma County, Arizona were also different from a previous hazard-ranking applied in California. Hazard-ranking schemes that take into account pesticide use, toxicity, and exposure potential can help prioritize pesticides of greatest health risk in agricultural communities. This study is the first to provide pesticide hazard-rankings for endocrine disruption and reproductive/developmental toxicity based on use, toxicity, and exposure potential. These hazard-ranking schemes can be applied to other agricultural communities for prioritizing community-specific pesticide hazards to target decreasing health risk. PMID:23783270
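The use-toxicity-exposure composite described above can be sketched as a product score; the factor values in the usage test below are hypothetical, not data from the Yuma County study.

```python
def hazard_rank(pesticides):
    """Rank pesticides by a composite chronic-health hazard score.

    pesticides: {name: (use, toxicity, exposure)} where each factor is a
    normalized score in [0, 1]. The composite is the product of the three
    factors, so a pesticide must score on all of them to rank highly.
    Returns names sorted from most to least hazardous.
    """
    scores = {name: use * tox * expo
              for name, (use, tox, expo) in pesticides.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

The multiplicative form mirrors the study's point that heavy use alone is not enough: a widely applied but low-toxicity compound can rank below a moderately used, highly toxic one.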
Lee, Tian-Fu
2014-12-01
Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, helping health care workers and medical personnel make correct clinical decisions and treatments rapidly. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require the server's public keys and avoids the time-consuming modular exponentiations and elliptic-curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model and realizes the lower bounds on messages and rounds in communications. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security but also has lower computational cost and fewer transmissions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Li, Chun-Ta; Wu, Tsu-Yang; Chen, Chin-Ling; Lee, Cheng-Chi; Chen, Chien-Ming
2017-06-23
In recent years, with the increase in degenerative diseases and the aging population in advanced countries, demands for medical care of older or solitary people have increased continually in hospitals and healthcare institutions. Applying wireless sensor networks in an IoT-based telemedicine system enables doctors, caregivers or families to monitor patients' physiological conditions at any time and place according to the acquired information. However, transmitting physiological data through the Internet raises concerns about patients' personal privacy. Therefore, before users can access medical care services in an IoT-based medical care system, they must be authenticated. Typically, user authentication and data encryption are the most critical mechanisms for securing network communications over a public channel between two or more participants. In 2016, Liu and Chung proposed a bilinear pairing-based password authentication scheme for wireless healthcare sensor networks. They claimed their authentication scheme can not only secure sensor data transmission, but also resist various well-known security attacks. In this paper, we demonstrate that Liu-Chung's scheme has some security weaknesses, and we further present an improved secure authentication and data encryption scheme for the IoT-based medical care system, which can provide user anonymity and prevent the security threats of replay and password/sensed data disclosure attacks. Moreover, we modify the authentication process to reduce redundancy in protocol design, and the proposed scheme is more efficient in performance compared with previous related schemes. Finally, the proposed scheme is provably secure in the random oracle model under ECDHP.
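A generic challenge-response pattern of the kind such authentication schemes build on can be sketched as follows; this is not Liu-Chung's protocol or the paper's improved scheme, and the key, nonce, and identity values are hypothetical.

```python
import hashlib
import hmac
import os

def server_challenge():
    """Fresh random nonce, issued per session to prevent replay."""
    return os.urandom(16)

def make_response(shared_key, nonce, identity):
    """Prover's keyed hash over the server's nonce and claimed identity."""
    return hmac.new(shared_key, nonce + identity, hashlib.sha256).hexdigest()

def server_verify(shared_key, nonce, identity, response):
    """Recompute the expected response and compare in constant time."""
    expected = make_response(shared_key, nonce, identity)
    return hmac.compare_digest(expected, response)
```

Because the nonce changes every session, a captured response cannot be replayed later, which is the property the replay-attack analysis in such papers targets.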
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. The scheme is based on the fifth-order central interpolants developed in [1], used in the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. The scheme provides numerical approximations whose error is as much as an order of magnitude smaller than in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two, and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity, and examine the connection between this method and that in [2].
Fault-tolerant Greenberger-Horne-Zeilinger paradox based on non-Abelian anyons.
Deng, Dong-Ling; Wu, Chunfeng; Chen, Jing-Ling; Oh, C H
2010-08-06
We propose a scheme to test the Greenberger-Horne-Zeilinger paradox based on braidings of non-Abelian anyons, which are exotic quasiparticle excitations of topological states of matter. Because topologically ordered states are robust against local perturbations, this scheme is in some sense "fault-tolerant" and might close the detection-inefficiency loophole in previous experimental tests of the Greenberger-Horne-Zeilinger paradox. In turn, the construction of the Greenberger-Horne-Zeilinger paradox reveals the nonlocal property of non-Abelian anyons. Our results indicate that non-Abelian fractional statistics are a pure quantum effect and cannot be described by local realistic theories. Finally, we present a possible experimental implementation of the scheme based on anyonic interferometry technologies.
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
Tan, Zuowen
2014-03-01
The telecare medicine information system enables patients to receive health monitoring at home and to access medical services over the internet or mobile networks. In recent years, schemes based on cryptography have been proposed to address the security and privacy issues in telecare medicine information systems. However, many of these schemes are insecure or inefficient. Recently, Awasthi and Srivastava proposed a three-factor authentication scheme for telecare medicine information systems. In this paper, we show that their scheme is vulnerable to reflection attacks and, furthermore, fails to provide three-factor security and user anonymity. We propose a new three-factor authentication scheme for telecare medicine information systems. Detailed analysis demonstrates that the proposed scheme provides mutual authentication, a server that never learns the password, freedom of password choice, biometric update, and three-factor security. Moreover, the new scheme provides user anonymity. Compared with previous three-factor authentication schemes, the proposed scheme is more secure and practical.
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest because, in practical engine simulations, extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method had previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, the involvement of NOx chemistry, the selection of a proper error tolerance, and the treatment of the interaction between chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare them. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved; for a larger n-heptane kinetic mechanism of 61 species, the saving is 50%.
The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.
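The DRGEP idea of propagating error along a relation graph can be sketched as a max-product path search: a species is kept if the product of direct interaction coefficients along some path from a target species exceeds a tolerance. The coefficients below are illustrative, not values from any mechanism.

```python
import heapq

def drgep_reduce(direct, targets, eps):
    """Keep species whose DRGEP coefficient to any target exceeds eps.

    direct[a][b] is the direct interaction coefficient (in [0, 1]) of
    species b on target-side species a. The overall coefficient
    R(target, s) is the maximum over all paths of the product of edge
    coefficients, computed here with a Dijkstra-style max-product search.
    """
    keep = set()
    for target in targets:
        r = {target: 1.0}
        heap = [(-1.0, target)]
        while heap:
            neg, a = heapq.heappop(heap)
            if -neg < r.get(a, 0.0):
                continue  # stale heap entry
            for b, w in direct.get(a, {}).items():
                cand = -neg * w  # path coefficient through a
                if cand > r.get(b, 0.0):
                    r[b] = cand
                    heapq.heappush(heap, (-cand, b))
        keep |= {s for s, v in r.items() if v >= eps}
    return keep
```

Raising `eps` prunes more species and speeds up the chemistry solve at the cost of accuracy, which is the error-tolerance trade-off the study investigates.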
Quantum money with nearly optimal error tolerance
NASA Astrophysics Data System (ADS)
Amiri, Ryan; Arrazola, Juan Miguel
2017-06-01
We present a family of quantum money schemes with classical verification that display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and tolerate noise up to 23%, which we conjecture approaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Finally, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.
NASA Astrophysics Data System (ADS)
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a linear fractional transform (LFT). It encrypts only the requisite parts of the sensitive information in the lifting-wavelet transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed scheme, confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys are used instead of the fixed keys of other approaches to control the encryption process and make attacks substantially harder. The new S-box is constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and the chaotic systems strengthens the overall encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for cryptographic applications.
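One common way to build a chaotic S-box, sketched here as a general approach that omits the paper's LFT mixing step, is to rank iterates of the logistic map; the parameters below are illustrative, not the scheme's keys.

```python
def logistic_sbox(x0=0.7, r=3.99, skip=1000):
    """Build a bijective 8-bit S-box by ranking logistic-map iterates.

    Iterates x -> r*x*(1-x), discards a transient of `skip` steps, then
    uses the rank order of 256 samples as a permutation of 0..255, so
    the result is guaranteed to be invertible.
    """
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    samples = []
    for _ in range(256):
        x = r * x * (1.0 - x)
        samples.append(x)
    order = sorted(range(256), key=samples.__getitem__)
    sbox = [0] * 256
    for rank, idx in enumerate(order):
        sbox[idx] = rank
    return sbox
```

Changing `x0` or `r` yields a different permutation, which is how chaotic parameters can act as part of a dynamic key.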
NASA Technical Reports Server (NTRS)
Myhill, Elizabeth A.; Boss, Alan P.
1993-01-01
In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accurate finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
Bidirectional teleportation of a pure EPR state by using GHZ states
NASA Astrophysics Data System (ADS)
Hassanpour, Shima; Houshmand, Monireh
2016-02-01
In the present paper, a novel bidirectional quantum teleportation protocol is proposed. Using the entanglement swapping technique, two GHZ states are shared as a quantum channel between Alice and Bob, the legitimate users. In this scheme, based on controlled-NOT operations, single-qubit measurements, and appropriate unitary operations, the two users can simultaneously transmit a pure EPR state to each other, whereas in previous protocols the users could only teleport single-qubit states to each other, and required channels of more than four qubits. The proposed scheme is therefore more economical than previous protocols.
NASA Astrophysics Data System (ADS)
Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu
2015-06-01
We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's place with certain probability. Compared with previously existing schemes, the success probability in the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several attractive discussions on the properties of the presented schemes, including the success probability and reducibility, are made. Remarkably, the proposed schemes can be faithfully achieved with unity total success probability when the employed channels are reduced into maximally entangled ones.
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.
Sound beam manipulation based on temperature gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Feng; School of Physics & Electronic Engineering, Changshu Institute of Technology, Changshu 215500; Quan, Li
Previous research with temperature gradients has shown the feasibility of controlling airborne sound propagation. Here, we present a temperature-gradient-based airborne sound manipulation scheme: a cylindrical acoustic omnidirectional absorber (AOA). The proposed AOA has high absorption performance and can almost completely absorb the incident wave. Geometric acoustics is used to obtain the refractive index distributions for different radii, from which the desired temperature gradients are deduced. Since no resonant units are applied in the scheme, its working bandwidth is expected to be broadband. The scheme is temperature-tuned and easy to realize, and is of potential interest to fields such as noise control and acoustic cloaking.
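The link between acoustic refractive index and air temperature can be sketched as follows; the radial index profile in the usage example is illustrative, not the distribution derived in the paper.

```python
import math

def temperature_for_index(n, t0=293.15):
    """Air temperature (K) realizing acoustic refractive index n.

    With sound speed c(T) = c(T0) * sqrt(T / T0) and n = c(T0) / c(T),
    the required temperature is T = T0 / n**2: a higher index (slower
    sound) needs cooler air, a lower index needs hotter air.
    """
    return t0 / (n * n)

def aoa_temperature_profile(radii, R, t0=293.15):
    """Temperatures realizing a hypothetical gradient-index profile
    n(r) = sqrt(R / r); NOT the exact AOA distribution from the paper."""
    return [temperature_for_index(math.sqrt(R / r), t0) for r in radii]
```

For this illustrative profile the required temperature rises linearly with radius (T = T0 * r / R), matching the intuition that the device is "tuned" purely by shaping the temperature field.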
Demonstration of spatial-light-modulation-based four-wave mixing in cold atoms
NASA Astrophysics Data System (ADS)
Juo, Jz-Yuan; Lin, Jia-Kang; Cheng, Chin-Yao; Liu, Zi-Yu; Yu, Ite A.; Chen, Yong-Fan
2018-05-01
Long-distance quantum optical communications usually require efficient wave-mixing processes to convert the wavelengths of single photons. Many quantum applications based on electromagnetically induced transparency (EIT) have been proposed and demonstrated at the single-photon level, such as quantum memories, all-optical transistors, and cross-phase modulations. However, EIT-based four-wave mixing (FWM) in a resonant double-Λ configuration has a maximum conversion efficiency (CE) of 25% because of absorptive loss due to spontaneous emission. An improved scheme using spatially modulated intensities of two control fields has been theoretically proposed to overcome this conversion limit. In this study, we first demonstrate wavelength conversion from 780 to 795 nm with a 43% CE by using this scheme at an optical density (OD) of 19 in cold 87Rb atoms. According to the theoretical model, the CE in the proposed scheme can further increase to 96% at an OD of 240 under ideal conditions, thereby attaining an identical CE to that of the previous nonresonant double-Λ scheme at half the OD. This spatial-light-modulation-based FWM scheme can achieve a near-unity CE, thus providing an easy method of implementing an efficient quantum wavelength converter for all-optical quantum information processing.
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
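The relaxed Jacobi iteration commonly used to solve such a pressure Poisson equation (PPE) can be sketched as follows; the 1D Laplacian system in the usage test stands in for the actual SPH discretization, and the relaxation factor 0.5 is the conventional choice rather than a value from this paper.

```python
def relaxed_jacobi(a_diag, a_off, b, omega=0.5, iters=400):
    """Relaxed (damped) Jacobi iteration for A p = b.

    a_diag: diagonal entries of A
    a_off:  function applying the off-diagonal part of A to a vector
    Each sweep blends the previous iterate with the Jacobi update,
    which damps oscillatory error modes; omega = 0.5 is a common
    choice for SPH pressure solves.
    """
    n = len(b)
    p = [0.0] * n
    for _ in range(iters):
        q = a_off(p)
        p = [(1.0 - omega) * p[i] + omega * (b[i] - q[i]) / a_diag[i]
             for i in range(n)]
    return p
```

In an SPH solver the residual of this iteration is exactly the density deviation being driven toward zero, which is why solver convergence and compressibility error are reported together.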
Li, Chun-Ta; Lee, Cheng-Chi; Weng, Chi-Yao
2014-09-01
Telecare medicine information systems (TMIS) are widely used to provide a convenient and efficient communication platform between patients at home and physicians at medical centers or home health care (HHC) organizations. To ensure patient privacy, in 2013, Hao et al. proposed a chaotic-map-based authentication scheme with user anonymity for TMIS. Later, Lee showed that Hao et al.'s scheme makes no provision for fairness in session key establishment and gave an efficient user authentication and key agreement scheme using smart cards, in which only a few hashing and Chebyshev chaotic map operations are required. In addition, Jiang et al. showed that Hao et al.'s scheme cannot resist the stolen smart card attack, and they presented an improved scheme that attempts to repair the security pitfalls found in Hao et al.'s scheme. In this paper, we find that both Lee's and Jiang et al.'s authentication schemes have a serious security problem: a registered user's secret parameters may be intentionally exposed to many non-registered users, which enables a service misuse attack. Therefore, we propose a slight modification of Lee's scheme to remedy these shortcomings. Compared with previous schemes, our improved scheme not only inherits the advantages of Lee's and Jiang et al.'s authentication schemes for TMIS but also remedies their serious security weakness of being unable to withstand the service misuse attack.
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes, the quasi-2D and the hybrid partitioning scheme, is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on a detailed analysis of the group dimensions in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
A cloud detection scheme for the Chinese Carbon Dioxide Observation Satellite (TANSAT)
NASA Astrophysics Data System (ADS)
Wang, Xi; Guo, Zheng; Huang, Yipeng; Fan, Hongjie; Li, Wanbiao
2017-01-01
Cloud detection is an essential preprocessing step for retrieving carbon dioxide from satellite observations of reflected sunlight. During the pre-launch study of the Chinese Carbon Dioxide Observation Satellite (TANSAT), a cloud-screening scheme was presented for the Cloud and Aerosol Polarization Imager (CAPI), which only performs measurements in five channels located in the visible to near-infrared regions of the spectrum. The scheme for CAPI, based on previous cloud-screening algorithms, defines a method to regroup individual threshold tests for each pixel in a scene according to the derived clear confidence level. This scheme is proven to be more effective for sensors with few channels. The work relies upon radiance data from the Visible and Infrared Radiometer (VIRR) onboard the Chinese FengYun-3A Polar-orbiting Meteorological Satellite (FY-3A), which uses four wavebands similar to those of CAPI and can serve as a proxy for its measurements. The scheme has been applied to a number of VIRR scenes over four target areas (desert, snow, ocean, forest) for all seasons. To assess the screening results, comparisons against the cloud-screening product from MODIS are made. The evaluation suggests that the proposed scheme inherits the advantages of schemes described in previous publications and shows improved cloud-screening results. A seasonal analysis reveals that this scheme performs better during warmer seasons, except for observations over oceans, where results are much better in colder seasons.
Consistent forcing scheme in the cascaded lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Fei, Linlin; Luo, Kai Hong
2017-11-01
In this paper, we give an alternative derivation of the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM reduces to an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the nonslip rule, the second-order convergence rate in space, and the isotropy of the consistent forcing scheme are demonstrated through numerical simulations of several canonical problems. Several forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between the MRT LBM and the CLBM under a general framework.
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun
2008-05-01
This paper proposes an all-optical label processing scheme that uses multiple optical orthogonal codes sequences (MOOCS) as optical labels for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of multiple optical orthogonal codes (MOOC) selected from multiple-group optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of the MOOCS-OPS network are given and analyzed. First, theoretical analyses are used to prove that MOOCS greatly enlarges the number of available optical labels compared to the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction, and optical label rewriting, are given and studied. These results verify that the proposed MOOCS-OPS scheme is feasible.
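The claimed enlargement of the label space is easy to quantify under one simplifying assumption: a MOOCS label is an ordered sequence of m distinct codes drawn from n available codes, whereas a SOOC label is a single code. The values of n and m below are hypothetical:

```python
from math import perm

# Illustrative count (assumed model): a SOOC label space has n labels,
# while MOOCS sequences of m distinct codes give P(n, m) = n!/(n-m)! labels.

def sooc_labels(n):
    return n

def moocs_labels(n, m):
    return perm(n, m)  # ordered sequences of m distinct codes

print(sooc_labels(8))      # 8 labels with single codes
print(moocs_labels(8, 3))  # 8*7*6 = 336 labels with sequences of 3
```

Even for small code pools, sequencing multiplies the label count by orders of magnitude, which is the essence of the theoretical argument summarized above.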
East Asian winter monsoon forecasting schemes based on the NCEP's climate forecast system
NASA Astrophysics Data System (ADS)
Tian, Baoqiang; Fan, Ke; Yang, Hongqing
2017-12-01
The East Asian winter monsoon (EAWM) is the major climate system in the Northern Hemisphere during boreal winter. In this study, we developed two schemes to improve the forecasting skill of the interannual variability of the EAWM index (EAWMI) using the interannual increment prediction method, also known as the DY method. First, we found that version 2 of the NCEP's Climate Forecast System (CFSv2) showed higher skill in predicting the EAWMI in DY form than in its original form. Exploiting this advantage of the DY method, Scheme-I was obtained by adding the EAWMI DY predicted by CFSv2 to the observed EAWMI of the previous year. This scheme showed higher forecasting skill than CFSv2. Specifically, during 1983-2016, the temporal correlation coefficient between the Scheme-I-predicted and observed EAWMI was 0.47, exceeding the 99% significance level, and the root-mean-square error (RMSE) decreased by 12%. The autumn Arctic sea ice and North Pacific sea surface temperature (SST) are two important external forcing factors for the interannual variability of the EAWM. Therefore, a second (hybrid) prediction scheme, Scheme-II, was also developed. This scheme involves not only the EAWMI DY of CFSv2, but also the sea-ice concentration (SIC) observed in the previous autumn over the Laptev and East Siberian seas and the temporal coefficients of the third mode of the North Pacific SST in DY form. We found that a negative SIC anomaly in the preceding autumn over the Laptev and East Siberian seas could lead to a significant enhancement of the Aleutian low and the East Asian westerly jet in the following winter, whereas the intensity of the winter Siberian high was mainly affected by the third mode of the North Pacific autumn SST. Scheme-I and Scheme-II also showed higher predictive ability for the EAWMI in negative anomaly years compared to CFSv2.
More importantly, the improvement in the prediction skill of the EAWMI by the new schemes, especially for Scheme-II, could enhance the forecasting skill of the winter 2-m air temperature (T-2m) in most parts of China, as well as the intensity of the Aleutian low and Siberian high in winter. The new schemes provide a theoretical basis for improving the prediction of winter climate in China.
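The core of Scheme-I can be sketched with synthetic numbers: the model forecasts the interannual increment DY(t) = I(t) - I(t-1), and adding the observed previous-year index recovers the forecast of the index itself. The EAWMI values below are hypothetical:

```python
import numpy as np

# DY (interannual increment) forecasting in a nutshell: predict the
# year-to-year increment, then add last year's observation.

def dy_forecast(dy_pred, obs_prev_year):
    return dy_pred + obs_prev_year

obs = np.array([0.3, -0.1, 0.4, -0.5])   # hypothetical observed EAWMI series
true_dy = np.diff(obs)                   # increments the model aims to predict
pred = dy_forecast(true_dy, obs[:-1])    # perfect-increment limit
print(np.allclose(pred, obs[1:]))        # recovers the index exactly: True
```

In practice the predicted increment carries model error, but anchoring each forecast to the observed previous-year value removes the model's systematic bias in the index itself, which is why the DY form can outperform the direct forecast.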
The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers
NASA Astrophysics Data System (ADS)
He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da
2016-01-01
We previously proposed an intra-cavity pumping scheme for simultaneously generating dual-wavelength lasers and constructed a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers based on this scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.
A blur-invariant local feature for motion blurred image matching
NASA Astrophysics Data System (ADS)
Tong, Qiang; Aoki, Terumasa
2017-07-01
Image matching between a blurred image (caused by camera motion, defocus, etc.) and a non-blurred image is a critical task for many image/video applications. However, most existing local feature schemes fail at this task. This paper presents a blur-invariant descriptor and a novel local feature scheme comprising the descriptor and an interest point detector based on moment symmetry, the authors' previous work. The descriptor is built on a new concept, the center peak moment-like element (CPME), which is robust to blur and boundary effects. By construction from CPMEs, the descriptor is also distinctive and well suited for image matching. Experimental results show that our scheme outperforms state-of-the-art methods for blurred image matching.
Threshold secret sharing scheme based on phase-shifting interferometry.
Deng, Xiaopeng; Shi, Zhengang; Wen, Wei
2016-11-01
We propose a new method for secret image sharing with a (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied by an encryption key in advance, is first encrypted using a Fourier transform. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information about the secret image. Thus, a (3,N) threshold secret sharing scheme is implemented. Compared with our previously reported method, the algorithm in this paper is suited not only to binary images but also to gray-scale images. Moreover, the proposed algorithm achieves a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.
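The reconstruction principle can be illustrated with a toy three-step phase-shifting calculation. The phase steps, the synthetic interferograms, and the use of a bare phase array as a stand-in for the encoded image are assumptions for illustration, not the paper's full optical scheme:

```python
import numpy as np

# Toy three-step phase-shifting recovery: three "shadows"
# I_n = A + B*cos(phi + d_n) with steps d_n = 0, 2*pi/3, 4*pi/3
# determine the encoded phase via
# phi = atan2(sqrt(3)*(I3 - I2), 2*I1 - I2 - I3).

def recover_phase(i1, i2, i3):
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, size=16)       # stand-in for the encoded image
a, b = 2.0, 1.0                                 # background and modulation
shots = [a + b * np.cos(phi + d) for d in (0.0, 2*np.pi/3, 4*np.pi/3)]
print(np.allclose(recover_phase(*shots), phi))  # three shadows recover phi: True
```

With only two interferograms the system I_n = A + B cos(phi + d_n) has three unknowns (A, B, phi) and is underdetermined, which mirrors the threshold property: two or fewer shadows reveal nothing definite about the phase.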
Chaos based video encryption using maps and Ikeda time delay system
NASA Astrophysics Data System (ADS)
Valli, D.; Ganesan, K.
2017-12-01
Chaos-based cryptosystems are an efficient approach to fast, highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one uses a higher-dimensional 12D chaotic map and the other is based on the Ikeda delay differential equation (DDE); both are suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, known as cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential, and chosen/known plaintext attacks. Detailed analysis is carried out to demonstrate the security and uniqueness of the proposed scheme.
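The CBC-style diffusion described above can be sketched with a toy logistic-map keystream in place of the paper's 12D map or Ikeda DDE. The key (x0, r) and IV values are illustrative only:

```python
# Minimal sketch: a logistic-map keystream substitutes each pixel, and XOR
# with the previous cipher pixel (cipher block chaining) makes every cipher
# pixel depend on all earlier plaintext pixels.

def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)           # chaotic logistic iteration
        out.append(int(x * 256) % 256)  # quantise state to one byte
    return out

def encrypt(pixels, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(pixels))
    prev, cipher = iv, []
    for p, k in zip(pixels, ks):
        c = (p ^ k) ^ prev              # substitute, then chain (CBC)
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(cipher))
    prev, plain = iv, []
    for c, k in zip(cipher, ks):
        plain.append((c ^ prev) ^ k)    # undo chaining, then substitution
        prev = c
    return plain

data = [10, 20, 30, 40, 50]
print(decrypt(encrypt(data)) == data)   # round-trips to the plaintext: True
```

Changing a single plaintext pixel alters every subsequent cipher pixel through the chained XOR, which is the avalanche behavior that defeats differential analysis.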
The Solution to Pollution is Distribution: Design Your Own Chaotic Flow
NASA Astrophysics Data System (ADS)
Tigera, R. G.; Roth, E. J.; Neupauer, R.; Mays, D. C.
2015-12-01
Plume spreading promotes the molecular mixing that drives chemical reactions in porous media in general, and remediation reactions in groundwater aquifers in particular. Theoretical analysis suggests that engineered injection and extraction, a specific sequence of pumping through wells surrounding a contaminant plume, can improve groundwater remediation through chaotic advection. Selecting an engineered injection and extraction scheme is difficult, however, because the engineer must recommend a pumping scheme for a contaminated site without prior knowledge of how the scheme will perform. To address this difficulty, this presentation describes a Graphical User Interface (GUI) designed to help engineers develop, test, and observe pumping schemes as described in previous research (Mays, D.C. and Neupauer, R.M., 2012, Plume spreading in groundwater by stretching and folding, Water Resour. Res., 48, W07501, doi:10.1029/2011WR011567). The inputs allow the user to manipulate model conditions such as the number of wells, plume size, and pumping scheme. Plume evolution is modeled, assuming no diffusion or dispersion, using analytical solutions for injection or extraction through individual wells or pairs of wells (i.e., dipoles). Using the GUI, an engineered injection and extraction scheme can be determined that best fits the remediation needs of the contaminated site. By creating multiple injection and extraction schemes, the user can learn about the plume shapes created by different schemes and, ultimately, recommend a pumping scheme based on experience of fluid flow gained in the GUI. The pumping schemes developed through this GUI are expected to guide more advanced modeling and laboratory studies that account for the crucial role of dispersion in groundwater remediation.
New auto-tuning technique for the hydrogen maser
NASA Technical Reports Server (NTRS)
Sydnor, R. L.; Maleki, L.
1983-01-01
Auto-tuning of the maser cavity compensates for the cavity pulling effect and other contributions to long-term frequency drift. Schemes previously proposed for maser cavity auto-tuning can adversely affect the performance of the maser. A new scheme is proposed based on the phase relationship between the electric and magnetic fields inside the cavity. This technique has the desired feature of auto-tuning the cavity with very high sensitivity and without disturbing the maser performance. Some approaches for the implementation of this scheme and possible areas of difficulty are examined.
Chang, I-Pin; Lee, Tian-Fu; Lin, Tsung-Hung; Liu, Chuan-Ming
2015-11-30
Key agreements that use only password authentication are convenient in communication networks, but these key agreement schemes often fail to resist possible attacks, and therefore provide poor security compared with some other authentication schemes. To increase security, many authentication and key agreement schemes use smartcard authentication in addition to passwords. Thus, two-factor authentication and key agreement schemes using smartcards and passwords are widely adopted in many applications. Vaidya et al. recently presented a two-factor authentication and key agreement scheme for wireless sensor networks (WSNs). Kim et al. observed that the Vaidya et al. scheme fails to resist gateway node bypassing and user impersonation attacks, and then proposed an improved scheme for WSNs. This study analyzes the weaknesses of the two-factor authentication and key agreement scheme of Kim et al., which include vulnerability to impersonation attacks, lost smartcard attacks and man-in-the-middle attacks, violation of session key security, and failure to protect user privacy. An efficient and secure authentication and key agreement scheme for WSNs based on the scheme of Kim et al. is then proposed. The proposed scheme not only solves the weaknesses of previous approaches, but also meets additional security requirements while maintaining low computational cost.
Collaborative Protection and Control Schemes for Shipboard Electrical Systems
2007-03-26
VSCs) for fault current limiting and interruption. Revisions needed on the VSCs to perform these functions have been identified, and feasibility of this...disturbances very fast - less than 3-4 ms [3]. Next section summarizes the details of the agent-based protection scheme that uses the VSC as the...fault currents. In our previous work [2, 3], it has been demonstrated that this new functionality for VSC can be achieved by proper selection of
Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian
2018-05-10
Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone.
Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%.
An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks
Yu, Shidi; Liu, Xiao; Cai, Zhiping; Wang, Tian
2018-01-01
Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone.
Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%. PMID:29748525
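The radius/hop trade-off at the heart of the ABRCD argument can be sketched as a back-of-envelope calculation; the distances and radii below are hypothetical, not taken from the paper's simulations:

```python
import math

# With broadcast radius r, code reaches the network edge at distance d
# in ceil(d / r) hops, so enlarging the radius in energy-rich regions
# cuts both the broadcast count and the dissemination delay.

def hops_to_edge(d, r):
    return math.ceil(d / r)

d = 100.0                       # assumed source-to-edge distance (m)
print(hops_to_edge(d, 10.0))    # fixed small radius: 10 hops
print(hops_to_edge(d, 25.0))    # enlarged radius: 4 hops
```

The per-broadcast transmission energy grows with the radius, which is why the scheme enlarges the radius only where residual energy permits.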
Spatially Common Sparsity Based Adaptive Channel Estimation and Feedback for FDD Massive MIMO
NASA Astrophysics Data System (ADS)
Gao, Zhen; Dai, Linglong; Wang, Zhaocheng; Chen, Sheng
2015-12-01
This paper proposes a spatially common sparsity based adaptive channel estimation and feedback scheme for frequency-division duplex (FDD) based massive multi-input multi-output (MIMO) systems, which adapts training overhead and pilot design to reliably estimate and feed back the downlink channel state information (CSI) with significantly reduced overhead. Specifically, a non-orthogonal downlink pilot design is first proposed, which is very different from standard orthogonal pilots. By exploiting the spatially common sparsity of massive MIMO channels, a compressive sensing (CS) based adaptive CSI acquisition scheme is proposed, where the consumed time slot overhead only adaptively depends on the sparsity level of the channels. Additionally, a distributed sparsity adaptive matching pursuit algorithm is proposed to jointly estimate the channels of multiple subcarriers. Furthermore, by exploiting the temporal channel correlation, a closed-loop channel tracking scheme is provided, which adaptively designs the non-orthogonal pilot according to the previous channel estimation to achieve an enhanced CSI acquisition. Finally, we generalize the results of the multiple-measurement-vectors case in CS and derive the Cramer-Rao lower bound of the proposed scheme, which guides the design of the non-orthogonal pilot signals for improved performance. Simulation results demonstrate that the proposed scheme outperforms its counterparts, and it is capable of approaching the performance bound.
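The compressive-sensing idea behind the scheme can be sketched with plain orthogonal matching pursuit (not the authors' distributed sparsity adaptive matching pursuit): a sparse channel observed through random non-orthogonal pilots y = A h is recovered from far fewer pilot measurements than channel dimensions. The dimensions, pilot matrix, and channel below are illustrative assumptions:

```python
import numpy as np

# Orthogonal matching pursuit: greedily pick the pilot column best
# correlated with the residual, then least-squares fit on the support.

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(idx)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol             # project residual out
    h_hat = np.zeros(A.shape[1])
    h_hat[support] = sol
    return h_hat

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                            # taps, pilots, sparsity level
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random non-orthogonal pilots
h = np.zeros(n)
h[[3, 17, 42]] = [1.0, -0.8, 0.5]              # 3-sparse channel
h_hat = omp(A, A @ h, k)
print(np.linalg.norm(h_hat - h))               # near-zero reconstruction error
```

Because the required number of measurements scales with the channel's sparsity level rather than its dimension, the pilot overhead can adapt to the sparsity level exactly as the abstract describes.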
NASA Astrophysics Data System (ADS)
Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-02-01
In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k,n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k
NASA Technical Reports Server (NTRS)
Noor, A. K.; Stephens, W. B.
1973-01-01
Several finite difference schemes are applied to the stress and free vibration analysis of homogeneous isotropic and layered orthotropic shells of revolution. The study is based on a form of the Sanders-Budiansky first-approximation linear shell theory modified such that the effects of shear deformation and rotary inertia are included. A Fourier approach is used in which all the shell stress resultants and displacements are expanded in a Fourier series in the circumferential direction, and the governing equations reduce to ordinary differential equations in the meridional direction. While primary attention is given to finite difference schemes used in conjunction with first order differential equation formulation, comparison is made with finite difference schemes used with other formulations. These finite difference discretization models are compared with respect to simplicity of application, convergence characteristics, and computational efficiency. Numerical studies are presented for the effects of variations in shell geometry and lamination parameters on the accuracy and convergence of the solutions obtained by the different finite difference schemes. On the basis of the present study it is shown that the mixed finite difference scheme based on the first order differential equation formulation and two interlacing grids for the different fundamental unknowns combines a number of advantages over other finite difference schemes previously reported in the literature.
Quantum cryptography without switching.
Weedbrook, Christian; Lance, Andrew M; Bowen, Warwick P; Symul, Thomas; Ralph, Timothy C; Lam, Ping Koy
2004-10-22
We propose a new coherent state quantum key distribution protocol that eliminates the need to randomly switch between measurement bases. This protocol provides significantly higher secret key rates with increased bandwidths than previous schemes that only make single quadrature measurements. It also offers the further advantage of simplicity compared to all previous protocols which, to date, have relied on switching.
Trusted measurement model based on multitenant behaviors.
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the rapid growth of pervasive computing, especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is urgently needed to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme that captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement in which the decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme.
Trusted Measurement Model Based on Multitenant Behaviors
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the rapid growth of pervasive computing, especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is urgently needed to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme that captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement in which the decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme. PMID:24987731
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun
2018-06-01
The diffractive-imaging-based encryption (DIBE) scheme has attracted wide interest due to its compact architecture and modest experimental requirements. Nevertheless, the primary information can hardly be recovered exactly in real applications when speckle noise and potential occlusion of the ciphertext are considered. To deal with this issue, a customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarities of the CDC, combines two key techniques from previous approaches, i.e., input-support constraint and median filtering. The proposed scheme guarantees the reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
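The low-storage idea can be illustrated with a classic two-register Williamson-type explicit RK3 (not one of the paper's new IMEX schemes): the whole stage loop touches only the state u and one scratch register q, regardless of the number of stages. The test problem and coefficients are standard, used here for illustration:

```python
import math

# Two-register low-storage explicit RK3 (Williamson form):
# q <- a_i * q + dt * f(u);  u <- u + b_i * q  at each stage.
A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)

def ls_rk3_step(f, u, dt):
    q = 0.0                      # the single extra storage register
    for a, b in zip(A, B):
        q = a * q + dt * f(u)    # overwrite the register in place
        u = u + b * q            # overwrite the state in place
    return u

u, dt = 1.0, 0.01
for _ in range(100):             # integrate u' = -u from t = 0 to t = 1
    u = ls_rk3_step(lambda x: -x, u, dt)
print(abs(u - math.exp(-1.0)) < 1e-6)   # third-order accurate: True
```

For an N-dimensional PDE discretization, u and q become length-N arrays, so this scheme needs two registers of length N; the paper's point is that comparable storage can be retained while upgrading accuracy and stability in the IMEX setting.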
Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.
Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio
2017-08-04
Snakes change their locomotion patterns in response to the environment. This ability is a motivation for developing snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme for snake-like robots that exhibits autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrains) was developed. Additionally, the control scheme was validated via simulations. A key insight is that these locomotion patterns were not preprogrammed but emerged by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike previously proposed local reflexive mechanisms, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion, and generated reasonable locomotion patterns. It is expected that the results of this study can form the basis for designing robots that can work in unpredictable and unstructured environments.
Lee, Kilhung
2010-01-01
This paper presents a medium access control and scheduling scheme for wireless sensor networks. It uses time trees for sending data from the sensor node to the base station. For an energy efficient operation of the sensor networks in a distributed manner, time trees are built in order to reduce the collision probability and to minimize the total energy required to send data to the base station. A time tree is a data gathering tree where the base station is the root and each sensor node is either a relaying or a leaf node of the tree. Each tree operates in a different time schedule with possibly different activation rates. In simulations, the proposed time-tree scheme handles burst traffic better than the previous scheme based on energy and data arrival rate. PMID:22319270
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
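As a rough illustration of the nonlinear morphological detector idea (the paper's actual hybrid multispectral filters are not specified here), a flat-structuring-element white top-hat makes bright features narrower than the window stand out against smooth background clutter. The 1-D helper below is a hypothetical stand-in, not the authors' filter:

```python
import numpy as np

def white_tophat_1d(x, w):
    """Flat-structuring-element white top-hat: subtract the grey
    opening (sliding-window erosion followed by dilation) from the
    signal, so bright features narrower than the window stand out
    against smooth background clutter."""
    n = len(x)
    ero = np.array([x[max(0, i - w):i + w + 1].min() for i in range(n)])
    dil = np.array([ero[max(0, i - w):i + w + 1].max() for i in range(n)])
    return x - dil
```

Applied to a smooth ramp with a single narrow spike, the top-hat suppresses the ramp and leaves a strong response at the spike location.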
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the risk level of malignancy among the nodules detected in lung cancer screening. However, the existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Motivated by the currently prevailing learning ability of the convolutional neural network (CNN), which simulates the human neural network for target recognition, and by our previous research on texture features, we present a hybrid model that takes into consideration both global and local features for pulmonary nodule differentiation, using the largest public database, established by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme based on 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.

Photonic quantum digital signatures operating over kilometer ranges in installed optical fiber
NASA Astrophysics Data System (ADS)
Collins, Robert J.; Fujiwara, Mikio; Amiri, Ryan; Honjo, Toshimori; Shimizu, Kaoru; Tamaki, Kiyoshi; Takeoka, Masahiro; Andersson, Erika; Buller, Gerald S.; Sasaki, Masahide
2016-10-01
The security of electronic communications is a topic that has gained noteworthy public interest in recent years. As a result, there is an increasing public recognition of the existence and importance of mathematically based approaches to digital security. Many of these implement digital signatures to ensure that a malicious party has not tampered with the message in transit, that a legitimate receiver can validate the identity of the signer and that messages are transferable. The security of most digital signature schemes relies on the assumed computational difficulty of solving certain mathematical problems. However, reports in the media have shown that certain implementations of such signature schemes are vulnerable to algorithmic breakthroughs and emerging quantum processing technologies. Indeed, even without quantum processors, the possibility remains that classical algorithmic breakthroughs will render these schemes insecure. There is ongoing research into information-theoretically secure signature schemes, where the security is guaranteed against an attacker with arbitrary computational resources. One such approach is quantum digital signatures. Quantum signature schemes can be made information-theoretically secure based on the laws of quantum mechanics while comparable classical protocols require additional resources such as anonymous broadcast and/or a trusted authority. Previously, most early demonstrations of quantum digital signatures required dedicated single-purpose hardware and operated over restricted ranges in a laboratory environment. Here, for the first time, we present a demonstration of quantum digital signatures conducted over several kilometers of installed optical fiber. The system reported here operates at a higher signature generation rate than previous fiber systems.
NASA Astrophysics Data System (ADS)
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2017-01-01
In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.
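For context, the standard half-way bounce-back rule with its moving-wall momentum correction, the starting point of the discussion above, can be sketched for the D2Q9 lattice as follows. This is the generic textbook form, not the paper's coordinate-transformation scheme:

```python
import numpy as np

# Standard D2Q9 lattice: velocities, weights, opposite directions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])
cs2 = 1.0 / 3.0  # lattice speed of sound squared

def bounce_back_moving_wall(f_out, rho_w, u_wall):
    """Half-way bounce-back at a no-slip wall moving with velocity
    u_wall: each post-collision population f_out[i] is reflected into
    the opposite direction with a correction proportional to the wall
    velocity. f_out has shape (9,)."""
    f_in = np.empty_like(f_out)
    for i in range(9):
        f_in[opp[i]] = f_out[i] - 2.0 * w[i] * rho_w * (c[i] @ u_wall) / cs2
    return f_in
```

With a stationary wall the rule reduces to a pure reflection, and since the D2Q9 weights satisfy Σᵢ wᵢcᵢ = 0, the wall-velocity term conserves mass.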
Screen Miniatures as Icons for Backward Navigation in Content-Based Software.
ERIC Educational Resources Information Center
Boling, Elizabeth; Ma, Guoping; Tao, Chia-Wen; Askun, Cengiz; Green, Tim; Frick, Theodore; Schaumburg, Heike
Users of content-based software programs, including hypertexts and instructional multimedia, rely on the navigation functions provided by the designers of those programs. Typical navigation schemes use abstract symbols (arrows) to label basic navigational functions like moving forward or backward through screen displays. In a previous study, the…
Enhanced Security and Pairing-free Handover Authentication Scheme for Mobile Wireless Networks
NASA Astrophysics Data System (ADS)
Chen, Rui; Shu, Guangqiang; Chen, Peng; Zhang, Lijun
2017-10-01
With the wide deployment of mobile wireless networks, we aim to propose a secure and seamless handover authentication scheme that allows users to roam freely in wireless networks without worrying about security and privacy issues. Given the open characteristic of wireless networks, safety and efficiency should be considered seriously. Several previous protocols are designed based on a bilinear pairing mapping, which is time-consuming, inefficient, and unsuitable for practical situations. To address these issues, we designed a new pairing-free handover authentication scheme for mobile wireless networks. This scheme is an effective improvement of the protocol by Xu et al., which suffers from a mobile node impersonation attack. Security analysis and simulation experiments indicate that the proposed protocol has many excellent security properties when compared with other recent similar handover schemes, such as mutual authentication and resistance to known network threats, while requiring lower computation and communication cost.
Chang, I-Pin; Lee, Tian-Fu; Lin, Tsung-Hung; Liu, Chuan-Ming
2015-01-01
Key agreements that use only password authentication are convenient in communication networks, but these key agreement schemes often fail to resist possible attacks, and therefore provide poor security compared with some other authentication schemes. To increase security, many authentication and key agreement schemes use smartcard authentication in addition to passwords. Thus, two-factor authentication and key agreement schemes using smartcards and passwords are widely adopted in many applications. Vaidya et al. recently presented a two-factor authentication and key agreement scheme for wireless sensor networks (WSNs). Kim et al. observed that the Vaidya et al. scheme fails to resist gateway node bypassing and user impersonation attacks, and proposed an improved scheme for WSNs. This study analyzes the weaknesses of the two-factor authentication and key agreement scheme of Kim et al., which include vulnerability to impersonation attacks, lost smartcard attacks and man-in-the-middle attacks, violation of session key security, and failure to protect user privacy. An efficient and secure authentication and key agreement scheme for WSNs based on the scheme of Kim et al. is then proposed. The proposed scheme not only resolves the weaknesses of previous approaches, but also satisfies more security requirements while maintaining low computational cost. PMID:26633396
Optical Stabilization of a Microwave Oscillator for Fountain Clock Interrogation.
Lipphardt, Burghard; Gerginov, Vladislav; Weyers, Stefan
2017-04-01
We describe an optical frequency stabilization scheme for a microwave oscillator that is used for the interrogation of primary cesium fountain clocks. Because of its superior phase noise properties, this scheme, which is based on an ultrastable laser and a femtosecond laser frequency comb, overcomes the frequency instability limitations of fountain clocks given by the previously utilized quartz-oscillator-based frequency synthesis. The presented scheme combines the transfer of the short-term frequency instability of an optical cavity and the long-term frequency instability of a hydrogen maser to the microwave oscillator, and is designed to provide continuous long-term operation for extended measurement periods of several weeks. The twofold stabilization scheme on the one hand ensures the referencing of the fountain frequency to the hydrogen maser frequency, and on the other hand results in a phase noise level of the fountain interrogation signal that enables quantum-projection-noise-limited fountain frequency instabilities at the 2.5×10^-14 (τ/s)^(-1/2) level.
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bits from the secret information. Under this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
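One plausible classical reading of this embedding rule (the paper's exact quantum transform may differ) is sketched below: one secret bit goes into the second LSB of the B channel, and the remaining three bits are Gray-coded into the LSBs of R, G and B.

```python
def gray(n):
    """Reflected (binary) Gray code of n."""
    return n ^ (n >> 1)

def igray(v):
    """Inverse of the reflected Gray code."""
    n = 0
    while v:
        n ^= v
        v >>= 1
    return n

def embed_segment(r, g, b, bits):
    """Embed one 4-bit secret segment (b0, b1, b2, b3) into an RGB
    pixel: b0 into the second LSB of B, the Gray code of b1b2b3 into
    the LSBs of R, G, B. Illustrative classical sketch only."""
    b0, b1, b2, b3 = bits
    b = (b & ~0b10) | (b0 << 1)               # second LSB of B
    v = gray((b1 << 2) | (b2 << 1) | b3)      # Gray-coded 3-bit value
    r = (r & ~1) | ((v >> 2) & 1)
    g = (g & ~1) | ((v >> 1) & 1)
    b = (b & ~1) | (v & 1)
    return r, g, b

def extract_segment(r, g, b):
    """Recover the 4-bit segment from a stego pixel."""
    v = ((r & 1) << 2) | ((g & 1) << 1) | (b & 1)
    n = igray(v)
    return ((b >> 1) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)
```

Because the stored LSBs are the Gray code of the secret bits rather than the bits themselves, the stego LSBs frequently differ from the secret bits, consistent with the near-50% difference noted in the abstract.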
Hung, Le Xuan; Canh, Ngo Trong; Lee, Sungyoung; Lee, Young-Koo; Lee, Heejo
2008-01-01
For many sensor network applications such as military or homeland security, it is essential for users (sinks) to access the sensor network while they are moving. Sink mobility brings new challenges to secure routing in large-scale sensor networks. Previous studies on sink mobility have mainly focused on efficiency and effectiveness of data dissemination without security consideration. Also, studies and experiences have shown that considering security during design time is the best way to provide security for sensor network routing. This paper presents an energy-efficient secure routing and key management scheme for mobile sinks in sensor networks, called SCODEplus. It is a significant extension of our previous study in five aspects: (1) The key management scheme and routing protocol are considered during design time to increase security and efficiency; (2) The network topology is organized in a hexagonal plane, which is more efficient than the previous square-grid topology; (3) The key management scheme can eliminate the impacts of node compromise attacks on links between non-compromised nodes; (4) Sensor node deployment is based on a Gaussian distribution, which is more realistic than a uniform distribution; (5) No GPS or the like is required to provide sensor node location information. Our security analysis demonstrates that the proposed scheme can defend against common attacks in sensor networks, including node compromise attacks, replay attacks, selective forwarding attacks, sinkhole and wormhole attacks, Sybil attacks, and HELLO flood attacks. Both mathematical and simulation-based performance evaluation show that SCODEplus significantly reduces the communication overhead, energy consumption, and packet delivery latency, while it always delivers more than 97 percent of packets successfully. PMID:27873956
From WSN towards WoT: Open API Scheme Based on oneM2M Platforms.
Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok
2016-10-06
Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions.
ESS-FH: Enhanced Security Scheme for Fast Handover in Hierarchical Mobile IPv6
NASA Astrophysics Data System (ADS)
You, Ilsun; Lee, Jong-Hyouk; Sakurai, Kouichi; Hori, Yoshiaki
Fast Handover for Hierarchical Mobile IPv6 (F-HMIPv6), which combines the advantages of Fast Handover for Mobile IPv6 (FMIPv6) and Hierarchical Mobile IPv6 (HMIPv6), achieves superior performance in terms of handover latency and signaling overhead compared with previously developed mobility protocols. However, without being secured, F-HMIPv6 is vulnerable to various security threats. In 2007, Kang and Park proposed a security scheme that is seamlessly integrated into F-HMIPv6. In this paper, we reveal that Kang and Park's scheme cannot defend against Denial of Service (DoS) and redirect attacks while relying largely on the group key. We then propose an Enhanced Security Scheme for F-HMIPv6 (ESS-FH) that achieves strong key exchange and key independence, and addresses the weaknesses of Kang and Park's scheme. More importantly, it enables fast handover between different MAP domains. The proposed scheme is formally verified based on BAN logic, and its handover latency is analyzed and compared with that of Kang and Park's scheme.
Direct carrier-envelope phase control of an amplified laser system.
Balčiūnas, Tadas; Flöry, Tobias; Baltuška, Andrius; Stanislauskas, Tomas; Antipenkov, Roman; Varanavičius, Arūnas; Steinmeyer, Günter
2014-03-15
Direct carrier-envelope phase stabilization of an Yb:KGW MOPA laser system is demonstrated with a residual phase jitter reduced to below 100 mrad, which compares favorably with previous stabilization reports, both of amplified laser systems as well as of ytterbium-based oscillators. This novel stabilization scheme relies on a frequency synthesis scheme and a feed-forward approach. The direct stabilization of a sub-MHz frequency comb from a CPA amplifier not only reduces the phase noise but also greatly simplifies the stabilization setup.
NASA Technical Reports Server (NTRS)
Simon, M.; Tkacenko, A.
2006-01-01
In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.
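The core idea, multiplying the received samples by the conjugate of the fed-back symbol decisions so that correct decisions strip the PSK modulation and leave a pure carrier tone, can be sketched with a toy baseband model (this is an illustration of the principle, not the article's loop):

```python
import numpy as np

def strip_modulation(r, decisions):
    """Multiply received samples by the conjugate of the fed-back
    symbol decisions; when the decisions are correct, the PSK
    modulation cancels and only the carrier-phase rotation remains,
    which a loop can then track without squaring loss."""
    return r * np.conj(decisions)

# QPSK symbols rotated by an unknown carrier phase of 0.3 rad:
rng = np.random.default_rng(0)
sym = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 100))
received = sym * np.exp(1j * 0.3)
tone = strip_modulation(received, sym)   # constant-phase samples
```

With noisy samples and occasional decision errors the product is no longer a clean tone, which is why the article's iterative feedback relies on low error probability after decoding.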
Enhanced photoelectric detection of NV magnetic resonances in diamond under dual-beam excitation
NASA Astrophysics Data System (ADS)
Bourgeois, E.; Londero, E.; Buczak, K.; Hruby, J.; Gulka, M.; Balasubramaniam, Y.; Wachter, G.; Stursa, J.; Dobes, K.; Aumayr, F.; Trupke, M.; Gali, A.; Nesladek, M.
2017-01-01
The core issue for the implementation of NV center qubit technology is a sensitive readout of the NV spin state. We present here a detailed theoretical and experimental study of NV center photoionization processes, used as a basis for the design of a dual-beam photoelectric method for the detection of NV magnetic resonances (PDMR). This scheme, based on NV one-photon ionization, is significantly more efficient than the previously reported single-beam excitation scheme. We demonstrate this technique on small ensembles of ˜10 shallow NVs implanted in electronic grade diamond (a relevant material for quantum technology), on which we achieve a cw magnetic resonance contrast of 9%—three times enhanced compared to previous work. The dual-beam PDMR scheme allows independent control of the photoionization rate and spin magnetic resonance contrast. Under a similar excitation, we obtain a significantly higher photocurrent, and thus an improved signal-to-noise ratio, compared to single-beam PDMR. Finally, this scheme is predicted to enhance magnetic resonance contrast in the case of samples with a high proportion of substitutional nitrogen defects, and could therefore enable the photoelectric readout of single NV spins.
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism —gradual passage opening, wall friction and leakage— for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Coherence and visibility for vectorial light.
Luis, Alfredo
2010-08-01
Two-path interference of transversal vectorial waves is embedded within a larger scheme: this is four-path interference between four scalar waves. This comprises previous approaches to coherence between vectorial waves and restores the equivalence between correlation-based coherence and visibility.
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein
2018-02-01
The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward, including the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT), and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in these schemes varies between N^(l)(N^(l)-1)/2 in BT, 1 in BB, and N^(l)-1 in SBT or ISBT, where N^(l) is the instantaneous number of particles in the l-th cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than N^(l)-1, i.e., N_sel < N^(l)-1, keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, both of which recover the BB and SBT limits when N_sel is set to 1 and N^(l)-1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
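The selection-and-rescaling logic behind such schemes can be sketched as follows. The single rescaled per-pair probability below is an illustrative simplification, not the paper's two GBT formulas: drawing N_sel random pairs and boosting the acceptance probability by N(N-1)/(2·N_sel) keeps the expected collision number equal to that of the full Bernoulli Trial scheme.

```python
import random

def gbt_collision_step(cell, n_sel, p_bt, rng=random):
    """Draw n_sel random candidate pairs in one cell and accept each
    with a probability rescaled by N(N-1)/(2*n_sel), so that the
    expected number of collisions matches full Bernoulli Trial, where
    all N(N-1)/2 pairs are tested with probability p_bt. Returns the
    list of accepted (colliding) pairs. Candidate pairs may repeat
    across draws in this simplified sketch."""
    n = len(cell)
    if n < 2 or n_sel < 1:
        return []
    p_pair = min(1.0, p_bt * n * (n - 1) / (2.0 * n_sel))
    pairs = []
    for _ in range(n_sel):
        i, j = rng.sample(range(n), 2)   # two distinct random particles
        if rng.random() < p_pair:
            pairs.append((cell[i], cell[j]))
    return pairs
```

Setting n_sel = 1 mimics the Ballot Box limit and n_sel = N-1 the SBT limit of the pair-count spectrum described in the abstract.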
Chen, Huifang; Xie, Lei
2014-01-01
Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked. PMID:25529204
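The one-way hash chain underlying such schemes can be sketched in a few lines; `make_chain` below is a generic construction (SHA-256 is an assumed choice of hash), not the paper's full OHC&RP-SGKD protocol with revocation polynomials:

```python
import hashlib

def h(x: bytes) -> bytes:
    """One-way function (SHA-256 chosen for illustration)."""
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build a length-n one-way hash chain with chain[j] = h(chain[j+1]).
    A member holding chain[j] can derive every earlier key chain[i],
    i < j, by applying h repeatedly, but cannot compute any later key;
    this asymmetry is what the collusion attack between revoked and
    newly joined users exploits unless joining time is bound to key
    recovery, as in the scheme above."""
    chain = [seed]
    for _ in range(n - 1):
        chain.append(h(chain[-1]))
    chain.reverse()   # chain[0] = h^(n-1)(seed), ..., chain[n-1] = seed
    return chain
```

The group manager reveals the chain from the front, one element per session, so each broadcast implicitly re-authorizes all previous session keys for current members.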
AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
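As an illustration of interpolating local production and diffusion rates between optically thin and thick regimes, one common leakage-scheme choice is a harmonic-mean blend per energy bin; the ASL paper's actual interpolation formula may differ, so treat this as a generic sketch of the idea:

```python
import numpy as np

def effective_rate(r_prod, r_diff):
    """Harmonic-mean blend of a local production rate and a diffusion
    rate (per neutrino-energy bin): the result approaches r_prod where
    diffusion is fast (optically thin) and r_diff where diffusion is
    slow (optically thick), so the slower process always limits the
    effective neutrino loss rate."""
    r_prod = np.asarray(r_prod, dtype=float)
    r_diff = np.asarray(r_diff, dtype=float)
    return r_prod * r_diff / (r_prod + r_diff)
```

Doing this per discretized neutrino energy, rather than for gray (energy-integrated) rates, is what gives the spectral treatment its improved accuracy.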
A Laboratory Testbed for Embedded Fuzzy Control
ERIC Educational Resources Information Center
Srivastava, S.; Sukumar, V.; Bhasin, P. S.; Arun Kumar, D.
2011-01-01
This paper presents a novel scheme called "Laboratory Testbed for Embedded Fuzzy Control of a Real Time Nonlinear System." The idea is based upon the fact that project-based learning motivates students to learn actively and to use their engineering skills acquired in their previous years of study. It also fosters initiative and focuses…
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with previous schemes, the present method is more feasible because it requires only small phase shifts, instead of a series of related functions of photon numbers, in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and that it is possible to produce entangled states involving a large number of photons.
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
We review an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters, and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e-→3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
A novel quantum group signature scheme without using entangled states
NASA Astrophysics Data System (ADS)
Xu, Guang-Bao; Zhang, Ke-Jia
2015-07-01
In this paper, we propose a novel quantum group signature scheme. It allows a signer to sign a message on behalf of the group without the help of the group manager (the arbitrator), which distinguishes it from previous schemes. In addition, a signature can be verified again when its signer disavows having generated it. We analyze the validity and the security of the proposed signature scheme. Moreover, we discuss the advantages and disadvantages of the new scheme relative to existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than previous ones. Like its classic counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its attractive characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource-constrained in terms of processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system that respects sensor node constraints is highly desirable. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory-mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme enabling high performance for wireless sensor networks.
Deterministic Joint Remote Preparation of an Arbitrary Seven-qubit Cluster-type State
NASA Astrophysics Data System (ADS)
Ding, MengXiao; Jiang, Min
2017-06-01
In this paper, we propose a scheme for jointly remotely preparing an arbitrary seven-qubit cluster-type state using several GHZ entangled states as the quantum channel. The coefficients of the prepared states can be not only real but also complex. First, Alice performs a three-qubit projective measurement according to the amplitude coefficients of the target state, and then Bob carries out another three-qubit projective measurement based on its phase coefficients. Next, a three-qubit state containing all the information of the target state is prepared with suitable operations. Finally, the target seven-qubit cluster-type state is prepared deterministically by introducing four auxiliary qubits and performing appropriate local unitary operations on the prepared three-qubit state. All of the receiver's recovery operations are summarized in a concise formula. Furthermore, it is worth noting that our scheme is more feasible with present technologies than most previous schemes.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.
1993-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.
Don't make cache too complex: A simple probability-based cache management scheme for SSDs.
Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo
2017-01-01
Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, its performance and reliability degrade substantially. To address this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with high probability, while infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.
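The probabilistic admission idea can be sketched as follows. This is a hedged illustration of random cache-entrance testing, not ProCache's exact policy (eviction and the counter-based baseline are omitted):

```python
import random

def make_cache(capacity: int, admit_prob: float):
    """Probability-based cache admission: a write enters the cache only if a
    coin flip succeeds, so hot blocks (written many times) are admitted with
    high overall probability while cold blocks tend to stay out."""
    cache = set()

    def on_write(block_id):
        if block_id in cache:
            return True                              # cache hit
        if len(cache) < capacity and random.random() < admit_prob:
            cache.add(block_id)                      # probabilistic admission
            return True
        return False

    return on_write, cache

# A block written k times misses admission with probability (1 - p)**k;
# with p = 0.1, fifty writes leave it uncached with probability under 1%:
assert (1 - 0.1) ** 50 < 0.01
on_write, cache = make_cache(capacity=64, admit_prob=1.0)
on_write("hot-block")
assert "hot-block" in cache
```

No per-block metadata is kept, which is exactly the simplicity argument the abstract makes against reference-counter schemes.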
Digital watermarking for color images in hue-saturation-value color space
NASA Astrophysics Data System (ADS)
Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.
2014-05-01
This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits, in order to achieve the highest embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space carries the watermark bits, while the V component is used, in accordance with a human visual system model, to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly using a 3×3 spatial-domain Wiener filter. The efficiency of the proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attacks and against various types of attacks, is superior to that of previous watermarking schemes.
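One way to picture the roles of S and V is the sketch below, which embeds a single bit by nudging a pixel's saturation with a strength scaled by its value (brightness). The `base_strength` parameter and the additive rule are illustrative assumptions, not the paper's embedding or its Wiener-filter extraction:

```python
import colorsys

def embed_bit(rgb, bit, base_strength=0.02):
    """Embed one watermark bit in the S (saturation) component of a pixel.
    The V component scales the strength: brighter pixels tolerate a larger,
    less visible modification (an HVS-style rule; values are assumptions)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    delta = base_strength * v                  # visibility-scaled strength
    s_marked = min(1.0, s + delta) if bit else max(0.0, s - delta)
    return colorsys.hsv_to_rgb(h, s_marked, v)

pixel = (0.6, 0.4, 0.2)
marked = embed_bit(pixel, bit=1)
_, s_orig, _ = colorsys.rgb_to_hsv(*pixel)
_, s_new, _ = colorsys.rgb_to_hsv(*marked)
assert s_new > s_orig                          # bit 1 raised the saturation
```

Blind extraction would then compare the received S against an estimate of the original S (the paper uses a Wiener filter for that estimate).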
An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.
Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan
2017-07-14
High-efficiency lung nodule detection contributes dramatically to the risk assessment of lung cancer. Quickly locating the exact positions of lung nodules is a significant and challenging task, and extensive work has been done in this domain over approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require many image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation and feature extraction, to construct a complete CADe system. It is difficult for such schemes to process and analyze enormous amounts of data as the volume of medical images continues to increase. In addition, some state-of-the-art deep learning schemes impose strict requirements on the database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system efficiently improves the performance of lung nodule detection and greatly reduces the false positives under a huge amount of image data.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
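The flavor of event-based integration, and why the time constants matter, can be seen in a one-synapse sketch: between events, the membrane equation with an exponentially decaying input current has a closed-form solution, so the state can be advanced in a single exact step. The formula below is the standard solution of a linear ODE, valid only for `tau_m != tau_s` (echoing the kind of time-constant condition the letter analyzes); it is not the Carnevale-Hines scheme itself:

```python
import math

def lif_exact_step(v0, i0, dt, tau_m, tau_s):
    """Exact (event-based) update for dV/dt = -V/tau_m + i0*exp(-t/tau_s):
    integrate analytically from one synaptic event to the next instead of
    taking many small numerical steps. Requires tau_m != tau_s."""
    em = math.exp(-dt / tau_m)
    es = math.exp(-dt / tau_s)
    a = 1.0 / tau_m - 1.0 / tau_s
    return v0 * em + i0 * (es - em) / a

# Cross-check against a fine forward-Euler integration of the same ODE:
v, t, dt = 0.0, 0.0, 1e-5
while t < 0.01:
    v += dt * (-v / 0.02 + 1.0 * math.exp(-t / 0.005))
    t += dt
assert abs(v - lif_exact_step(0.0, 1.0, 0.01, 0.02, 0.005)) < 1e-3
```

The `(es - em) / a` term is exactly the kind of double-exponential difference whose behavior the generalized Carnevale-Hines lemma is used to compare.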
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the observability, control and protection of the North American power system. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and the mathematical formulation of TSA lends itself well to some of the time-domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach uses TSA in a novel way: the sensitivities of system state variables with respect to parameter variations determine the parameter controls required to achieve the desired state variable movements. The initial application operates under the assumption that the modeled power system has full observability; practical considerations are also discussed.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. 
Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
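The core upwinding choices above can be illustrated in a few lines: the viscous flux evaluates both phase mobilities at the saturation upstream of the total velocity, and the buoyancy direction is fixed by the phase density difference. This is a single-interface 1D sketch with an assumed quadratic mobility model, not the full IHU discretization (capillarity and the entry-pressure treatment are omitted):

```python
def ihu_viscous_flux(s_left, s_right, u_total, mobility):
    """Implicit Hybrid Upwinding, viscous part (sketch): both phase
    mobilities are evaluated at the saturation upstream of the TOTAL
    velocity, instead of per-phase upstream weighting."""
    s_up = s_left if u_total >= 0.0 else s_right
    lam_w = mobility(s_up)             # wetting-phase mobility
    lam_n = mobility(1.0 - s_up)       # non-wetting-phase mobility
    frac_w = lam_w / (lam_w + lam_n)   # fractional flow of wetting phase
    return frac_w * u_total

def buoyancy_direction(rho_w, rho_n):
    """Directionality of the buoyancy term, fixed by the density difference."""
    return 1.0 if rho_w > rho_n else -1.0

# Quadratic relative-permeability mobility with unit viscosity (an assumption):
mob = lambda s: s * s
f = ihu_viscous_flux(0.8, 0.3, u_total=1.0, mobility=mob)
# u_total > 0, so the upstream saturation is 0.8: frac = 0.64 / (0.64 + 0.04)
assert abs(f - 0.64 / 0.68) < 1e-12
```

Evaluating the viscous and buoyancy terms separately in this way is what makes the combined numerical flux monotone and differentiable, which in turn improves Newton convergence.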
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
A conjugate gradient method for solving the non-LTE line radiation transfer problem
NASA Astrophysics Data System (ADS)
Paletou, F.; Anterrieu, E.
2009-12-01
This study concerns the fast and accurate solution of the line radiation transfer problem under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our approach is based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane-parallel geometry and in the frame of the two-level atom model with monochromatic scattering, are discussed. Rates of convergence of the previously mentioned iterative schemes are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.
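The gap between Jacobi-type and Gauss-Seidel-type relaxation that motivates such comparisons can be demonstrated on a toy diagonally dominant system: Gauss-Seidel reuses freshly updated entries within a sweep and typically converges in roughly half the sweeps. The toy matrix below is unrelated to the radiative transfer operator, and the BiCG-P method itself is not shown:

```python
def jacobi_sweeps(A, b, sweeps):
    """Jacobi iteration: every update uses values from the PREVIOUS sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel_sweeps(A, b, sweeps):
    """Gauss-Seidel: updates within a sweep immediately reuse new values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]

def residual(x):
    return max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))

# Both converge on this diagonally dominant system; Gauss-Seidel is
# noticeably tighter after the same number of sweeps.
assert residual(gauss_seidel_sweeps(A, b, 20)) < residual(jacobi_sweeps(A, b, 20))
assert residual(jacobi_sweeps(A, b, 50)) < 1e-9
```

Krylov methods such as BiCG go further still, which is the paper's motivation for preferring BiCG-P over both stationary schemes.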
NASA Astrophysics Data System (ADS)
Chao, I.-Fen; Zhang, Tsung-Min
2015-06-01
Long-reach passive optical networks (LR-PONs) have been considered promising solutions for future access networks. In this paper, we propose a distributed medium access control (MAC) scheme over an advantageous LR-PON network architecture that reroutes the control information from and back to all ONUs through an (N + 1) × (N + 1) star coupler (SC) deployed near the ONUs, thereby overcoming the extremely long propagation delay problem in LR-PONs. In the network, the control slot is designed to contain the bandwidth requirements of all ONUs and is in-band time-division-multiplexed with a number of data slots within a cycle. In the proposed MAC scheme, a novel profit-weight-based dynamic bandwidth allocation (P-DBA) scheme is presented. The algorithm is designed to efficiently and fairly distribute the amount of excess bandwidth based on a profit value derived from the excess bandwidth usage of each ONU, which resolves the problems of previously reported DBA schemes that are either unfair or inefficient. The simulation results show that the proposed decentralized algorithms exhibit a nearly three-order-of-magnitude improvement in delay performance compared to centralized algorithms over LR-PONs. Moreover, the newly proposed P-DBA scheme guarantees low delay and fairness, even under attack by a malevolent ONU, irrespective of traffic load and burstiness.
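The excess-bandwidth sharing step of a profit-weight-based DBA can be sketched as follows. The profit values here are placeholders (set equal for all over-demanding ONUs); the paper derives them from each ONU's past excess bandwidth usage:

```python
def distribute_excess(requests, guaranteed, total):
    """Profit-weight-based DBA (sketch): each ONU first gets
    min(request, guaranteed); leftover bandwidth is shared among
    over-demanding ONUs in proportion to a profit weight."""
    grants = [min(r, g) for r, g in zip(requests, guaranteed)]
    excess = total - sum(grants)
    over = [i for i, (r, g) in enumerate(zip(requests, guaranteed)) if r > g]
    profits = {i: 1.0 for i in over}   # placeholder: equal profit values
    wsum = sum(profits.values())
    for i in over:
        extra = excess * profits[i] / wsum
        grants[i] += min(extra, requests[i] - grants[i])  # never over-grant
    return grants

g = distribute_excess(requests=[10, 40, 30], guaranteed=[20, 20, 20], total=70)
# ONU0 needs only 10; excess = 70 - (10 + 20 + 20) = 20, split equally
assert g == [10, 30, 30]
```

Making the weight decrease with past excess usage is what penalizes a greedy (or malevolent) ONU without starving well-behaved ones.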
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-04-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (compressed data gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch begins; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
ERIC Educational Resources Information Center
Wilkins, Jesse L. M.; Norton, Anderson; Boyce, Steven J.
2013-01-01
Previous research has documented schemes and operations that undergird students' understanding of fractions. This prior research was based, in large part, on small-group teaching experiments. However, written assessments are needed in order for teachers and researchers to assess students' ways of operating on a whole-class scale. In this study,…
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate water balance at the catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are distributed cross sections or equivalent cross sections (ECSs) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of a first-order sub-basin to further reduce computational time in SMART. Previous investigations using SMART have shown that the temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach; however, the spatial variability of soil moisture within a given HRU is ignored. Here, we examine a number of disaggregation schemes for soil moisture distribution within each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating the soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G
2016-08-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.
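The "index of discrimination" quoted above is conventionally the Hunter-Gaston application of Simpson's diversity index, and computing it from type counts takes one line. This is a generic illustration of the metric with made-up counts, not the study's data:

```python
def discrimination_index(type_counts):
    """Hunter-Gaston index of discriminatory power: the probability that
    two isolates drawn at random (without replacement) have different types."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Two hypothetical schemes over the same 10 isolates:
# splitting isolates into more, smaller groups raises the index.
d_coarse = discrimination_index([5, 5])           # 1 - 40/90 ≈ 0.556
d_fine = discrimination_index([2, 2, 2, 2, 2])    # 1 - 10/90 ≈ 0.889
assert d_fine > d_coarse
```

This is why SNP-based typing (nearly every isolate its own type) approaches 1.0, while SBT with a few dominant STs sits lower.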
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes, and slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tan, Yuanlong; Lin, Yi; Han, Jianrui; Lee, Young
2015-09-07
Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented cross-stratum optimization of optical network and application stratum resources, which allows data center services to be accommodated. In view of this, this study extends the data center resources to the user side to enhance the end-to-end quality of service. We propose a novel data center service localization (DCSL) architecture based on virtual resource migration in a software-defined elastic data center optical network. A migration evaluation scheme (MES) is introduced for DCSL based on the proposed architecture. DCSL can enhance the responsiveness to dynamic end-to-end data center demands and effectively reduce the blocking probability, globally optimizing optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN testbed. The performance of the MES scheme under a heavy traffic load scenario is also quantitatively evaluated based on the DCSL architecture, in terms of path blocking probability, provisioning latency and resource utilization, compared with another provisioning scheme.
Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.
Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco
2015-06-04
Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve the convergence speed and determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
Saleh, Mohammed A; Abdul Manaf, Azizah
2015-01-01
The growth of web technology has brought convenience to our lives, since the web has become the most important communication channel. However, this merit is now threatened by complicated network-based attacks, such as denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. Despite many researchers' efforts, no optimal solution that addresses all sorts of HTTP DoS/DDoS attacks is on offer. Therefore, this research aims to fill this gap by designing an alternative solution called the flexible, collaborative, multilayer DDoS prevention framework (FCMDPF). The innovative design of the FCMDPF framework handles all aspects of HTTP-based DoS/DDoS attacks through three sequential framework schemes (layers). Firstly, an outer blocking (OB) scheme blocks an attacking IP source if it is listed in the blacklist table. Secondly, the service traceback oriented architecture (STBOA) scheme validates whether the incoming request is launched by a human or by an automated tool, and then traces back the true attacking IP source. Thirdly, the flexible advanced entropy-based (FAEB) scheme eliminates high-rate DDoS (HR-DDoS) and flash crowd (FC) attacks. Compared with previous research, our framework's design provides efficient protection for web applications against all sorts of DoS/DDoS attacks.
A New Improving Quantum Secret Sharing Scheme
NASA Astrophysics Data System (ADS)
Xu, Ting-Ting; Li, Zhi-Hui; Bai, Chen-Ming; Ma, Min
2017-04-01
An improving quantum secret sharing (IQSS) scheme was introduced by Nascimento et al. (Phys. Rev. A 64, 042311 (2001)), who analyzed it in terms of the improved quantum access structure. In this paper, we propose a new improving quantum secret sharing scheme, and more quantum access structures can be realized by this scheme than by the previous one. For example, we prove that any threshold or hypercycle quantum access structure can be realized by the new scheme.
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
Improving the representation of mixed-phase cloud microphysics in the ICON-LEM
NASA Astrophysics Data System (ADS)
Tonttila, Juha; Hoose, Corinna; Milbrandt, Jason; Morrison, Hugh
2017-04-01
The representation of ice-phase cloud microphysics in ICON-LEM (the Large-Eddy Model configuration of the ICOsahedral Nonhydrostatic model) is improved by implementing the recently published Predicted Particle Properties (P3) scheme into the model. In the typical two-moment microphysical schemes, such as that previously used in ICON-LEM, ice-phase particles must be partitioned into several prescribed categories. It is inherently difficult to distinguish between categories such as graupel and hail based on just the particle size, yet this partitioning may significantly affect the simulation of convective clouds. The P3 scheme avoids the problems associated with predefined ice-phase categories that are inherent in traditional microphysics schemes by introducing the concept of "free" ice-phase categories, whereby the prognostic variables enable the prediction of a wide range of smoothly varying physical properties and hence particle types. To our knowledge, this is the first application of the P3 scheme in a large-eddy model with horizontal grid spacings on the order of 100 m. We will present results from ICON-LEM simulations with the new P3 scheme comprising idealized stratiform and convective cloud cases. We will also present real-case limited-area simulations focusing on the HOPE (HD(CP)2 Observational Prototype Experiment) intensive observation campaign. The results are compared with a matching set of simulations employing the two-moment scheme and the performance of the model is also evaluated against observations in the context of the HOPE simulations, comprising data from ground based remote sensing instruments.
Enhancing the Reliability of Head Nodes in Underwater Sensor Networks
Min, Hong; Cho, Yookun; Heo, Junyoung
2012-01-01
Underwater environments are quite different from terrestrial environments in terms of the communication media and operating conditions associated with those environments. In underwater sensor networks, the probability of node failure is high because sensor nodes are deployed in harsher environments than ground-based networks. The sensor nodes are surrounded by salt water and moved around by waves and currents. Many studies have focused on underwater communication environments in an effort to improve the data transmission throughput. In this paper, we present a checkpointing scheme for the head nodes to quickly recover from a head node failure. Experimental results show that the proposed scheme enhances the reliability of the networks and makes them more efficient in terms of energy consumption and the recovery latency compared to the previous scheme without checkpointing. PMID:22438707
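The checkpointing idea described above can be sketched as follows; the class name, state layout, and method names are illustrative assumptions, not the paper's actual protocol:

```python
import copy

class HeadNode:
    """Minimal sketch of head-node checkpointing in a sensor cluster:
    periodically snapshot the aggregation state so that, after a head-node
    failure, a successor resumes from the last checkpoint instead of
    recollecting all sensor data from scratch."""

    def __init__(self):
        self.state = {"round": 0, "aggregated": []}
        self.checkpoint = copy.deepcopy(self.state)

    def aggregate(self, reading):
        # Normal operation: collect and aggregate member-node readings.
        self.state["aggregated"].append(reading)
        self.state["round"] += 1

    def take_checkpoint(self):
        # Persist a snapshot (e.g. to a backup node in a real network).
        self.checkpoint = copy.deepcopy(self.state)

    def recover(self):
        # On failure, restore the snapshot; only work done since the
        # last checkpoint is lost, shortening recovery latency.
        self.state = copy.deepcopy(self.checkpoint)
```

The trade-off evaluated in such schemes is checkpoint frequency (energy cost per snapshot) against the amount of work lost on failure.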
NASA Technical Reports Server (NTRS)
Shen, C. N.; YERAZUNIS
1979-01-01
The feasibility of using range/pointing-angle data, such as might be obtained by a laser rangefinder, for the purpose of terrain evaluation in the 10-40 meter range on which to base the guidance of an autonomous rover was investigated. The decision procedure of the rapid estimation scheme for the detection of discrete obstacles has been modified to reinforce the detection ability. With the introduction of the logarithmic scanning scheme and the obstacle identification scheme, previously developed algorithms are combined to demonstrate the overall performance of the integrated route designation system using the laser rangefinder. In an attempt to cover a greater range, 30 m to 100 m, the problem of estimating gradients in the presence of pointing-angle noise at middle range is investigated.
Sixth- and eighth-order Hermite integrator for N-body simulations
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro
2008-10-01
We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme for most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
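The predictor step of such a scheme is a plain Taylor expansion over the stored derivatives of acceleration (jerk, snap, crackle); a minimal one-coordinate sketch with illustrative variable names:

```python
def hermite_predict(x, v, a, j, s, c, dt):
    """Taylor-series predictor using acceleration derivatives up to
    crackle (third derivative of a), as used in high-order Hermite
    schemes. Horner-style nesting keeps the arithmetic compact:
      x_p = x + v*dt + a*dt^2/2 + j*dt^3/6 + s*dt^4/24 + c*dt^5/120
      v_p = v + a*dt + j*dt^2/2 + s*dt^3/6 + c*dt^4/24
    """
    x_p = x + dt * (v + dt / 2 * (a + dt / 3 * (j + dt / 4 * (s + dt / 5 * c))))
    v_p = v + dt * (a + dt / 2 * (j + dt / 3 * (s + dt / 4 * c)))
    return x_p, v_p
```

In a real integrator the corrector then refines this prediction using the forces evaluated at the predicted positions; that step is omitted here.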
Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks
Ahmed, Khandakar; Gregory, Mark A.
2015-01-01
The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most of the state-of-the-art distributed data centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric based similarity searching (DCSMSS) is proposed. DCSMSS takes motivation from vector distance index, called iDistance, in order to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081
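The iDistance transform that motivates DCSMSS can be sketched as follows; the partition constant `c` and the reference-point layout are illustrative assumptions:

```python
import math

def idistance_key(point, refs, c=10.0):
    """Map a multi-dimensional point to a 1-D key: pick the nearest
    reference point i and encode i*c + dist(point, ref_i). A similarity
    (range) query then reduces to interval searches on this 1-D axis,
    one interval per affected partition. `c` must exceed the largest
    possible within-partition distance so that key ranges do not overlap."""
    dists = [math.dist(point, r) for r in refs]
    i = dists.index(min(dists))
    return i * c + dists[i]
```

All points within distance r of a query point q in partition i then have keys inside [i*c + d_i(q) - r, i*c + d_i(q) + r], which is the interval-search step the abstract refers to.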
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of QR codes against noise enables the accurate acquisition of the content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
7 CFR 701.36 - Schemes and devices and claims avoidances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Schemes and devices and claims avoidances. 701.36... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.36 Schemes and devices and claims..., any scheme or device designed to evade the maximum cost-share limitation that applies to the ECP or to...
NASA Astrophysics Data System (ADS)
Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng
2005-02-01
The increasing switching capacity brings the optical node considerable complexity. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means that the node is blocking to some extent; examples include the multi-granularity switching node, which in fact uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WCs). It is conceivable that these blocking nodes will have great effects on the problem of routing and wavelength assignment. Some previous works studied the blocking case of a partial-WC OXC using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find a path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, all the nodes of which employ a kind of partially shared WC OXC structure. In the simulation, a simple first-fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
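Route selection over advertised blocking states can be sketched as a shortest-path search in which each node contributes a weight of -log(1 - p), so that minimizing the path cost maximizes the end-to-end non-blocking probability. This is a hypothetical illustration of the idea, not the paper's exact algorithm:

```python
import heapq
import math

def least_blocking_path(adj, block_prob, src, dst):
    """Dijkstra variant where 'visiting' node n costs -log(1 - p_n).
    Because costs add while probabilities multiply, the cheapest path
    maximizes prod(1 - p_n), i.e. the end-to-end non-blocking probability.
    `adj` maps node -> list of neighbors; `block_prob` maps node -> p."""
    cost = {src: -math.log(1.0 - block_prob[src])}
    prev, heap = {}, [(cost[src], src)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == dst:
            break
        if c > cost.get(u, math.inf):
            continue  # stale heap entry
        for v in adj[u]:
            nc = c - math.log(1.0 - block_prob[v])
            if nc < cost.get(v, math.inf):
                cost[v], prev[v] = nc, u
                heapq.heappush(heap, (nc, v))
    # Walk predecessors back from the destination.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

A setup request routed this way avoids heavily blocked nodes, which is how advertisement reduces the retry/rerouting probability.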
Ibeas, Asier; de la Sen, Manuel
2006-10-01
The problem of controlling a tandem of robotic manipulators composing a teleoperation system with force reflection is addressed in this paper. The final objective of this paper is twofold: 1) to design a robust control law capable of ensuring closed-loop stability for robots with uncertainties and 2) to use the so-obtained control law to improve the tracking of each robot to its corresponding reference model in comparison with previously existing controllers when the slave is interacting with the obstacle. In this way, a multiestimation-based adaptive controller is proposed. Thus, the master robot is able to follow more accurately the constrained motion defined by the slave when interacting with an obstacle than when a single-estimation-based controller is used, improving the transparency property of the teleoperation scheme. The closed-loop stability is guaranteed if a minimum residence time, which might be updated online when unknown, between different controller parameterizations is respected. Furthermore, the analysis of the teleoperation and stability capabilities of the overall scheme is carried out. Finally, some simulation examples showing the working of the multiestimation scheme complete this paper.
NASA Astrophysics Data System (ADS)
Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong
2008-04-01
A new numerical method toward accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting a physical distribution of flow variables more accurately in multiple space dimensions. The wavenumber-extended optimization procedure for the finite volume approach based on the conservative requirement is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in the smooth region. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in the continuous region by restricting the application of MLP according to the decision of the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations such as spherical wave propagation, nonlinear wave propagation, the shock tube problem and a vortex preservation test problem is executed. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical or experimental results.
NASA Astrophysics Data System (ADS)
Wali, Mohebullah; Nakamura, Yukinori; Wakui, Shinji
In this study, a positioning stage is considered which is actuated by four pneumatic cylinders and vertically supported by four coil-type spring isolators. Previously, we realized base plate jerk feedback (BPJFB), which is analogous to a Master-Slave system that can synchronize the motion of the stage as a Slave to the motion of the base plate as a Master. However, in the case of real positioning, the stage exhibited slight self-oscillation at a higher frequency, due to the higher gains set for the outer feedback loop controller, besides its oscillation due to the natural vibration of the base plate. The self-oscillation of the stage was misunderstood to be the natural vibration of the base plate due to the reaction force. However, according to the experimental results, the BPJFB scheme was able to control both of the mentioned vibrations. Suppression of the self-vibration of the stage is an interesting phenomenon, which should be experimentally investigated. Therefore, the current study focuses on the suppression of the self-vibration of the stage by using the BPJFB scheme. The experimental results show that besides operating as a Master-Slave synchronizing system, the BPJFB scheme is able to increase the damping ratio and stiffness of the stage against its self-vibration. This newly recognized phenomenon contributes to a further increase in the proportional gain of the outer feedback loop controller. As a result, the positioning speed and stability can be improved.
Image watermarking capacity analysis based on Hopfield neural network
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Hongbin
2004-11-01
In watermarking schemes, watermarking can be viewed as a form of communication problem. Almost all previous work on image watermarking capacity is based on information theory, using the Shannon formula to calculate the capacity of watermarking. In this paper, we present a blind watermarking algorithm using a Hopfield neural network, and analyze watermarking capacity based on the neural network. In our watermarking algorithm, watermarking capacity is determined by the attraction basin of the associative memory.
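The information-theoretic baseline that this paper moves away from treats each pixel as an additive Gaussian channel. A minimal sketch of that Shannon-formula capacity estimate (the baseline only, not the attraction-basin analysis):

```python
import math

def watermark_capacity_bits(signal_var, noise_var, n_pixels):
    """Shannon capacity of an additive-Gaussian watermark channel:
    C = (n/2) * log2(1 + S/N) bits, treating each of the n pixels as an
    independent channel use with watermark power S embedded against
    image/attack noise power N. Illustrative baseline estimate only."""
    return 0.5 * n_pixels * math.log2(1.0 + signal_var / noise_var)
```

The Hopfield-based analysis instead bounds capacity by how many patterns the associative memory can store and still recall, i.e. by the size of the attraction basins.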
Finite difference schemes for long-time integration
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1993-01-01
Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times four or more times longer than those of similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
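For readers unfamiliar with compact schemes, the classical fourth-order Padé scheme for the first derivative illustrates the general form: derivative values are coupled implicitly across neighboring nodes. This is a generic sketch only; the paper's schemes are second-order variants optimized for long-time error, not this one.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Classical 4th-order Pade compact scheme on a periodic grid:
        (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1}
            = 3/(4h) * (f_{i+1} - f_{i-1}).
    A dense solve is used for clarity; production codes would use a
    cyclic tridiagonal solver instead."""
    n = len(f)
    A = np.eye(n)
    rhs = np.empty(n)
    for i in range(n):
        A[i, (i - 1) % n] = 0.25
        A[i, (i + 1) % n] = 0.25
        rhs[i] = 3.0 / (4.0 * h) * (f[(i + 1) % n] - f[(i - 1) % n])
    return np.linalg.solve(A, rhs)
```

The implicit coupling is what gives compact schemes their favorable wave-resolution properties per stencil point, the quantity the paper's minimization problem is designed to control globally over long integration times.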
Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Alewine, Neal Jon
1993-01-01
Multiple instruction rollback (MIR) is a technique for providing rapid recovery from transient processor failures and has been implemented in hardware in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy, while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.
Wörz, Stefan; Rohr, Karl
2006-01-01
We introduce an elastic registration approach which is based on a physical deformation model and uses Gaussian elastic body splines (GEBS). We formulate an extended energy functional related to the Navier equation under Gaussian forces which also includes landmark localization uncertainties. These uncertainties are characterized by weight matrices representing anisotropic errors. Since the approach is based on a physical deformation model, cross-effects in elastic deformations can be taken into account. Moreover, we have a free parameter to control the locality of the transformation for improved registration of local geometric image differences. We demonstrate the applicability of our scheme based on 3D CT images from the Truth Cube experiment, 2D MR images of the brain, as well as 2D gel electrophoresis images. It turns out that the new scheme achieves more accurate results compared to previous approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.
2014-09-08
We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second-order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.
Highly sensitive atomic based MW interferometry.
Shylla, Dangka; Nyakang'o, Elijah Ogaro; Pandey, Kanhaiya
2018-06-06
We theoretically study a scheme to develop atomic-based microwave (MW) interferometry using the Rydberg states in Rb. Unlike traditional MW interferometry, this scheme is not based upon electrical circuits; hence, the sensitivity to the phase and the amplitude/strength of the MW field is not limited by Nyquist thermal noise. Further, this system has a great advantage due to its much higher frequency range in comparison with electrical circuits, spanning from the radio frequency (RF) and MW regimes to the terahertz regime. In addition, it is two orders of magnitude more sensitive to field strength compared with prior demonstrations of MW electrometry using Rydberg atomic states. Furthermore, previously studied atomic systems are sensitive only to the field strength, not to the phase; hence, this scheme provides a great opportunity to characterize the MW field completely, including the propagation direction and the wavefront. The atomic-based MW interferometry is based upon a six-level loopy ladder system involving the Rydberg states, in which two sub-systems interfere constructively or destructively depending upon the phase between the MW electric fields closing the loop. This work opens up a new field, i.e., atomic-based MW interferometry, replacing the conventional electrical circuit in a much superior fashion.
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared with the previous self-similarity bi-prediction scheme.
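The weighted bi-prediction idea can be sketched in HEVC-style integer arithmetic. The weight values below are illustrative; the paper's contribution is selecting them adaptively per block:

```python
import numpy as np

def weighted_bi_prediction(p0, p1, w0, w1, shift=6):
    """Blend two predictor blocks with integer weights:
        pred = (w0*p0 + w1*p1 + offset) >> shift,
    generalizing the plain bi-prediction average (w0 = w1 = 2^(shift-1)).
    Inputs are uint8 sample blocks; intermediate math uses int32 to
    avoid overflow, and the result is clipped back to [0, 255]."""
    offset = 1 << (shift - 1)  # rounding offset
    pred = (w0 * p0.astype(np.int32) + w1 * p1.astype(np.int32) + offset) >> shift
    return np.clip(pred, 0, 255).astype(np.uint8)
```

With w0 = w1 = 32 and shift = 6 this reduces to the plain average used by the earlier self-similarity bi-prediction; unequal weights let the encoder favor the better-matching predictor block.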
Diode-Laser Pumped Far-Infrared Local Oscillator Based on Semiconductor Quantum Wells
NASA Technical Reports Server (NTRS)
Kolokolov, K.; Li, J.; Ning, C. Z.; Larrabee, D. C.; Tang, J.; Khodaparast, G.; Kono, J.; Sasa, S.; Inoue, M.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
The contents include: 1) Tetrahertz Field: A Technology Gap; 2) Existing THZ Sources and Shortcomings; 3) Applications of A THZ Laser; 4) Previous Optical Pumped LW Generations; 5) Optically Pumped Sb based Intersubband Generation Whys; 6) InGaAs/InP/AlAsSb QWs; 7) Raman Enhanced Optical Gain; 8) Pump Intensity Dependence of THZ Gain; 9) Pump-Probe Interaction Induced Raman Shift; 10) THZ Laser Gain in InGaAs/InP/AlAsSb QWs; 11) Diode-Laser Pumped Difference Frequency Generation (InGaAs/InP/AlAsSb QWs); 12) 6.1 Angstrom Semiconductor Quantum Wells; 13) InAs/GaSb/AlSb Nanostructures; 14) InAs/AlSb Double QWs: DFG Scheme; 15) Sb-Based Triple QWs: Laser Scheme; and 16) Exciton State Pumped THZ Generation. This paper is presented in viewgraph form.
Zhang, Xudong
2002-10-01
This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.
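The two deformation schemes can be sketched as weighting functions applied to the final-frame correction. The formulas below are an interpretation for illustration, not the paper's exact equations: both fix the initial frame (weight 0) and apply the full correction at the final frame (weight 1), differing in how the weight grows in between.

```python
import numpy as np

def deform_profile(theta, delta, scheme="TP"):
    """Deform a joint-angle profile so that its final frame shifts by
    `delta` while earlier frames change smoothly.
    Assumed weightings (illustrative, not the paper's formulas):
      TP (time-proportional):      w_k = k / (N-1)
      AP (amplitude-proportional): w_k proportional to cumulative |motion|
    Both satisfy w_0 = 0 and w_{N-1} = 1."""
    n = len(theta)
    if scheme == "TP":
        w = np.linspace(0.0, 1.0, n)
    else:  # "AP"
        moved = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(theta)))])
        w = moved / moved[-1] if moved[-1] > 0 else np.linspace(0.0, 1.0, n)
    return theta + w * delta
```

Under the AP weighting, frames where the joint moves more absorb more of the correction, whereas TP spreads it uniformly over time, which matches the study's finding that the two schemes distort the original kinematics differently.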
Environmental and ecological impacts of water supplement schemes in a heavily polluted estuary.
Su, Qiong; Qin, Huapeng; Fu, Guangtao
2014-02-15
Water supplement has been used to improve water quality in a heavily polluted river with small base flow. However, its adverse impacts, particularly on nearby sensitive ecosystems, have not been fully investigated in previous studies. In this paper, using the Shenzhen River estuary in China as a case study, the impacts of two potential water supplement schemes (a reclaimed water scheme and a seawater scheme) on water quality improvement and salinity alteration of the estuary are studied. The influences of salinity alteration on the dominant mangrove species (Aegiceras corniculatum, Kandelia candel, and Avicennia marina) are further evaluated by comparing the alteration with the historical salinity data and the optimum salinity range for mangrove growth. The results obtained indicate that the targets of water quality improvement can be achieved by implementing the water supplement schemes with roughly the same flow rates. The salinity under the reclaimed water scheme lies in the range of historical salinity variation, and its average value is close to the optimum salinity for mangrove growth. Under the seawater scheme, however, the salinity in the estuary exceeds the range of historical salinity variation and approaches the upper bound of the survival salinity of the mangrove species with relatively low salt tolerance (e.g. A. corniculatum). Therefore, the seawater scheme has negative ecological consequences, while the reclaimed water scheme has less ecological impact and is recommended in this study. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok
2011-03-01
The previous pixel-level digital-to-analog conversion (DAC) scheme, which implements part of a DAC inside each pixel circuit, has proven very efficient for reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open question is whether the pixel-level DAC is compatible with existing pixel circuits, including schemes that compensate for TFT variations and IR drops on supply rails, which is of primary importance for active-matrix organic light-emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully combined with previous compensation schemes by giving two examples of voltage- and current-programming pixels. Previous pixel-level DAC schemes require two additional TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, because the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be extended to a 4-bit resolution, or applied together with 1:2 demultiplexing driving, for 6- to 8-in. diagonal XGA AMOLED display panels.
High-Order Central WENO Schemes for 1D Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In this paper we derive fully-discrete Central WENO (CWENO) schemes for approximating solutions of one-dimensional Hamilton-Jacobi (HJ) equations, combining our previous works. We introduce third- and fifth-order accurate schemes, which are the first central schemes for HJ equations of order higher than two. The core ingredient in the derivation of our schemes is a high-order CWENO reconstruction in space.
Zhang, Jun; Kong, Yingying; Ruan, Zhi; Huang, Jun; Song, Tiejun; Song, Jingjuan; Jiang, Yan; Yu, Yunsong; Xie, Xinyou
2014-01-01
The multilocus sequence typing (MLST) scheme of Ureaplasma based on four housekeeping genes (ftsH, rpL22, valS, and thrS) was described in our previous study; here we introduce an expanded MLST (eMLST) scheme with improved discriminatory power, developed by adding two putative virulence genes (ureG and mba-np1) to the original MLST scheme. To evaluate the discriminatory power of eMLST, a total of 14 reference strains of Ureaplasma serovars and 269 clinical strains (134 isolated from symptomatic patients and 135 obtained from asymptomatic persons) were investigated. Our study confirmed that all 14 serotype strains could be successfully differentiated into 14 eMLST sequence types (eSTs), while some of them could not be differentiated by the original MLST scheme, and a total of 136 eSTs were identified among the clinical isolates we investigated. In addition, phylogenetic analysis revealed two genetically distant clusters (clusters I and II), with most clinical isolates located in cluster I. These findings accord with, and further support, the concept of two well-known genetic lineages (Ureaplasma parvum and Ureaplasma urealyticum) described in our previous study. Interestingly, although both clusters were associated with clinical manifestations, subgroup 2 of cluster II had a pronounced adverse effect on patients and might be a potential risk factor for clinical outcomes. In conclusion, the eMLST scheme offers investigators a highly discriminative typing tool suitable for precise epidemiological investigation and for assessing the clinical relevance of Ureaplasma.
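The bookkeeping at the heart of any (e)MLST scheme — each distinct combination of alleles across the typing loci defines a sequence type — can be sketched as follows. The function and numbering convention (STs assigned in order of first appearance) are illustrative, not the curated numbering of an actual MLST database.

```python
def assign_sequence_types(profiles):
    """Assign sequence types (STs): each distinct tuple of allele
    numbers across the typing loci gets the next ST number, in order
    of first appearance. Isolates sharing a profile share an ST.
    (Illustrative sketch; real MLST databases use curated ST numbers.)"""
    st_of, types = {}, []
    for profile in profiles:
        key = tuple(profile)
        if key not in st_of:
            st_of[key] = len(st_of) + 1  # next unused ST number
        types.append(st_of[key])
    return types
```

Adding loci (as the eMLST scheme does with ureG and mba-np1) can only split existing STs into finer groups, which is why discriminatory power improves.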
Conspicuity assessment of selected propeller and tail rotor paint schemes.
DOT National Transportation Integrated Search
1978-08-01
An investigation was conducted to rank the conspicuity of three paint schemes for airplane propellers and two schemes for tail rotor blades previously recommended by the U.S. military and the British Civil Aviation Authority. Thirty volunteer subjects with n...
Radiation pressure driving of a dusty atmosphere
NASA Astrophysics Data System (ADS)
Tsang, Benny T.-H.; Milosavljević, Miloš
2015-10-01
Radiation pressure can be dynamically important in star-forming environments such as ultra-luminous infrared and submillimetre galaxies. Whether and how radiation drives turbulence and bulk outflows in star formation sites is still unclear. The uncertainty in part reflects the limitations of direct numerical schemes that are currently used to simulate radiation transfer and radiation-gas coupling. An idealized setup in which radiation is introduced at the base of a dusty atmosphere in a gravitational field has recently become the standard test for radiation-hydrodynamics methods in the context of star formation. To a series of treatments featuring the flux-limited diffusion approximation as well as a short-characteristics tracing and M1 closure for the variable Eddington tensor approximation, we here add another treatment that is based on the implicit Monte Carlo radiation transfer scheme. Consistent with all previous treatments, the atmosphere undergoes Rayleigh-Taylor instability and readjusts to a near-Eddington-limited state. We detect late-time net acceleration in which the turbulent velocity dispersion matches that reported previously with the short-characteristics-based radiation transport closure, the most accurate of the three preceding treatments. Our technical result demonstrates the importance of accurate radiation transfer in simulations of radiative feedback.
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
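The unweighted least-squares gradient reconstruction mentioned above can be illustrated with a minimal linear-fit sketch: the gradient at a node is the least-squares solution of the neighbor-difference system. The paper's scheme augments this with quadratic terms; names here are illustrative.

```python
import numpy as np

def ls_gradient(x0, neighbors, u0, u_neighbors):
    """Unweighted least-squares gradient at node x0 from neighbor
    values: solve min ||A g - b|| where row i of A is (x_i - x0)
    and b_i = u_i - u0. (Linear fit for brevity; a quadratic fit
    adds second-order monomials to the rows of A.)"""
    A = np.asarray(neighbors, dtype=float) - np.asarray(x0, dtype=float)
    b = np.asarray(u_neighbors, dtype=float) - u0
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g
```

On a linear field the reconstruction is exact for any node distribution with at least two independent edge directions, which is the sense in which such schemes are insensitive to mesh irregularity at first order.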
Differential Privacy Preserving in Big Data Analytics for Connected Health.
Lin, Chi; Song, Zihao; Song, Houbing; Zhou, Yanhong; Wang, Yi; Wu, Guowei
2016-04-01
In Body Area Networks (BANs), the big data collected by wearable sensors usually contain sensitive information, which must be appropriately protected. Previous methods neglected the privacy protection issue, leading to privacy exposure. In this paper, a differential privacy protection scheme for big data in body sensor networks is developed. Compared with previous methods, this scheme provides privacy protection with higher availability and reliability. We introduce the concept of dynamic noise thresholds, which makes our scheme more suitable for processing big data. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme can still add enough interference to big sensitive data to preserve privacy.
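A minimal sketch of the underlying mechanism is shown below: standard Laplace-mechanism noise, combined with a threshold that suppresses new releases for small changes in the reading. The fixed `threshold` stands in for the paper's dynamic noise thresholds, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def dp_release(readings, sensitivity=1.0, epsilon=0.5, threshold=2.0, seed=0):
    """Release a Laplace-perturbed value (scale = sensitivity/epsilon,
    the standard differential-privacy mechanism) only when the raw
    reading has moved more than `threshold` since the last release;
    otherwise repeat the previous noisy release. The fixed threshold
    is a stand-in for the paper's dynamic noise thresholds."""
    rng = np.random.default_rng(seed)
    out, last_in, last_out = [], None, None
    for x in readings:
        if last_in is None or abs(x - last_in) > threshold:
            last_in = x
            last_out = x + rng.laplace(0.0, sensitivity / epsilon)
        out.append(last_out)
    return out
```

Suppressing releases for small changes both reduces transmission volume and limits how much noisy output an attacker can average over.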
NASA Astrophysics Data System (ADS)
da Silva, Eduardo; Dos Santos, Aldri Luiz; Lima, Michele N.; Albini, Luiz Carlos Pessoa
Among the key management schemes for MANETs, the Self-Organized Public-Key Management System (PGP-Like) is the main chaining-based key management scheme. It is fully self-organized and does not require any certificate authority. Two kinds of misbehavior attacks are considered great threats to PGP-Like: lack-of-cooperation and impersonation attacks. This work quantifies the impact of such attacks on PGP-Like. Simulation results show that PGP-Like maintained its effectiveness when subjected to the lack-of-cooperation attack, contradicting previous theoretical results. It works correctly even in the presence of more than 60% misbehaving nodes, although the convergence time is affected with only 20% misbehaving nodes. On the other hand, PGP-Like is completely vulnerable to the impersonation attack. Its functionality is affected with just 5% misbehaving nodes, confirming previous theoretical results.
Progressive retry for software error recovery in distributed systems
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.
1993-01-01
In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
Advanced feedback control methods in EXTRAP T2R reversed field pinch
NASA Astrophysics Data System (ADS)
Yadikin, D.; Brunsell, P. R.; Paccagnella, R.
2006-07-01
Previous experiments in the EXTRAP T2R reversed field pinch device have shown the possibility of suppression of multiple resistive wall modes (RWM). A feedback system has been installed in EXTRAP T2R having 100% coverage of the toroidal surface by the active coil array. Predictions based on theory and the previous experimental results show that the number of active coils should be sufficient for independent stabilization of all unstable RWMs in the EXTRAP T2R. Experiments using different feedback schemes are performed, comparing the intelligent shell, the fake rotating shell, and the mode control with complex feedback gains. Stabilization of all unstable RWMs throughout the discharge duration of td≈10τw is seen using the intelligent shell feedback scheme. Mode rotation and the control of selected Fourier harmonics is obtained simultaneously using the mode control scheme with complex gains. Different sensor signals are studied. A feedback system with toroidal magnetic field sensors could have an advantage of lower feedback gain needed for the RWM suppression compared to the system with radial magnetic field sensors. In this study, RWM suppression is demonstrated, using also the toroidal field component as a sensor signal in the feedback system.
Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional
NASA Astrophysics Data System (ADS)
Song, Jong-Won; Hirao, Kimihiko
2015-07-01
We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost for periodic systems while improving the reproducibility of the bandgaps of semiconductors. Here we present a distance-based screening scheme tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme helps to save time in the HF exchange integration by efficiently decreasing the number of integrals in, specifically, the near-field region without incurring substantial changes in total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme has 1.56 times the time cost of a pure functional, while the previous Gau-PBE had 1.84 times and HSE06 had 3.34 times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Ackermann, M.; Adams, J.
Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.
Versteeg, Bart; Bruisten, Sylvia M; van der Ende, Arie; Pannekoek, Yvonne
2016-04-18
Chlamydia trachomatis infections remain the most common bacterial sexually transmitted infection worldwide. To gain more insight into the epidemiology and transmission of C. trachomatis, several schemes of multilocus sequence typing (MLST) have been developed. We investigated the clustering of C. trachomatis strains derived from men who have sex with men (MSM) and heterosexuals using the MLST scheme based on 7 housekeeping genes (MLST-7) adapted for clinical specimens and a high-resolution MLST scheme based on 6 polymorphic genes, including ompA (hr-MLST-6). Specimens from 100 C. trachomatis-infected MSM and 100 heterosexual women were randomly selected from previous studies and sequenced. We adapted the MLST-7 scheme to a nested assay to be suitable for direct typing of clinical specimens. All selected specimens were typed using both the adapted MLST-7 scheme and the hr-MLST-6 scheme. Clustering of C. trachomatis strains derived from MSM and heterosexuals was assessed using minimum spanning tree analysis. Sufficient chlamydial DNA was present in 188 of the 200 (94%) selected samples. Using the adapted MLST-7 scheme, full MLST profiles were obtained for 187 of 188 tested specimens, a high success rate of 99.5%. Of these 187 specimens, 91 (48.7%) were from MSM and 96 (51.3%) from heterosexuals. We detected 21 sequence types (STs) using the adapted MLST-7 scheme and 79 STs using the hr-MLST-6 scheme. Minimum spanning tree analysis of the MLST-7 data showed no reflection of separate transmission in MSM and heterosexual hosts. Moreover, typing using the hr-MLST-6 scheme identified genetically related clusters within each of the clusters identified by the MLST-7 scheme. Thus, no distinct transmission of C. trachomatis could be observed in MSM and heterosexuals using the adapted MLST-7 scheme, in contrast to the hr-MLST-6 scheme.
Efficient quantum transmission in multiple-source networks.
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-04-02
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. Transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters so that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design, and on the reduction of the multiple-source network under the restricted maximum-flow assumption, an optimal scheme is proposed for the special quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
López-Larraz, Eduardo; Ibáñez, Jaime; Trincado-Alonso, Fernando; Monge-Pereira, Esther; Pons, José Luis; Montesano, Luis
2017-12-17
Motor rehabilitation based on the association of electroencephalographic (EEG) activity and proprioceptive feedback has been demonstrated as a feasible therapy for patients with paralysis. To promote long-lasting motor recovery, these interventions have to be carried out across several weeks or even months. The success of these therapies partly relies on the performance of the system decoding movement intentions, which normally has to be recalibrated to deal with the nonstationarities of the cortical activity. Minimizing the recalibration times is important to reduce the setup preparation and maximize the effective therapy time. To date, a systematic analysis of the effect of recalibration strategies in EEG-driven interfaces for motor rehabilitation has not yet been performed. Data from patients with stroke (4 patients, 8 sessions) and spinal cord injury (SCI) (4 patients, 5 sessions) undergoing two different paradigms (self-paced and cue-guided, respectively) are used to study the performance of the EEG-based classification of motor intentions. Four calibration schemes are compared, considering different combinations of training datasets from previous sessions and/or the validated session. The results show significant differences in classifier performance in terms of true positives (TPs) and false positives (FPs). Combining training data from previous sessions with data from the validation session provides the best compromise between the amount of data needed for calibration and the classifier performance. With this scheme, the average true (false) positive rates obtained are 85.3% (17.3%) and 72.9% (30.3%) for the self-paced and the cue-guided protocols, respectively. These results suggest that the use of optimal recalibration schemes for EEG-based classifiers of motor intentions leads to enhanced performances of these technologies, while not requiring long calibration phases prior to starting the intervention.
Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth.
Rossi, Michael R; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2010-02-01
The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy. This study is a part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active length in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time. Copyright 2008 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Bates, J. R.; Semazzi, F. H. M.; Higgins, R. W.; Barros, Saulo R. M.
1990-01-01
A vector semi-Lagrangian semi-implicit two-time-level finite-difference integration scheme for the shallow water equations on the sphere is presented. A C-grid is used for the spatial differencing. The trajectory-centered discretization of the momentum equation in vector form eliminates pole problems and, at comparable cost, gives greater accuracy than a previous semi-Lagrangian finite-difference scheme which used a rotated spherical coordinate system. In terms of the insensitivity of the results to increasing timestep, the new scheme is as successful as recent spectral semi-Lagrangian schemes. In addition, the use of a multigrid method for solving the elliptic equation for the geopotential allows efficient integration with an operation count which, at high resolution, is of lower order than in the case of the spectral models. The properties of the new scheme should allow finite-difference models to compete with spectral models more effectively than has previously been possible.
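The semi-Lagrangian idea — tracing each grid point back along the flow to its departure point and interpolating the field there — can be sketched in one dimension with constant advection speed and linear interpolation (the paper treats the full shallow water equations on the sphere; names here are illustrative):

```python
import numpy as np

def semi_lagrangian_step(u, c, dt, dx):
    """One semi-Lagrangian step for u_t + c*u_x = 0 on a periodic grid:
    trace each grid point back to its departure point x - c*dt and
    interpolate u there (linear interpolation for brevity)."""
    n = len(u)
    x = np.arange(n) * dx
    f = ((x - c * dt) % (n * dx)) / dx  # departure points in index units
    j = np.floor(f).astype(int)
    w = f - j                           # linear interpolation weight
    j %= n
    return (1 - w) * u[j] + w * u[(j + 1) % n]
```

Because the time step enters only through the departure point, the update stays stable for Courant numbers above one, which is the property that lets semi-Lagrangian models take the long time steps discussed above.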
Multiple-3D-object secure information system based on phase shifting method and single interference.
Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam
2016-05-20
We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, the number of phase functions (PFs) is reduced from five to three compared with our previous method, which implies that only one cross beam splitter is needed to implement the single decryption interference. Moreover, the proposed scheme has further advantages: each 3D object can be decrypted independently, without first decrypting a series of other objects; the quality of the decrypted slice image of each object is high, with no correlation coefficient value lower than 0.95; and no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
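The three-step phase shifting principle the scheme relies on can be illustrated with the standard phase-recovery formula for shifts of 0, π/2 and π (a textbook sketch of the measurement step, not the paper's full encryption pipeline):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Recover the wrapped phase from three interferograms recorded with
    reference phase shifts 0, pi/2 and pi. With I(d) = A + B*cos(phi + d):
      I1 - I3       = 2*B*cos(phi)
      I1 + I3 - 2*I2 = 2*B*sin(phi)
    so phi = atan2(I1 + I3 - 2*I2, I1 - I3)."""
    return np.arctan2(i1 + i3 - 2.0 * i2, i1 - i3)
```

Three measurements are the minimum needed to separate the three unknowns (background A, modulation B, phase phi), which is why three PFs suffice.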
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective numerical modeling approach for studying wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce RSFD theory, from which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the accurate wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
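The least-squares idea — choosing finite-difference coefficients to minimize dispersion error over a band of wavenumbers rather than matching Taylor terms at zero wavenumber — can be sketched for a 1D staggered-grid first derivative. This is an illustrative simplification with grid spacing normalized to 1 (the paper applies the idea to the rotated staggered grid in TTI media), and the band limit `kmax` is an assumed parameter:

```python
import numpy as np

def ls_staggered_coeffs(M, kmax=2.4, n=200):
    """Least-squares staggered-grid FD coefficients (h = 1): a length-2M
    centered staggered stencil has spectral response
        k_fd(k) = sum_m 2 * a_m * sin((m - 1/2) * k),  m = 1..M,
    so we pick a_m to fit k_fd(k) ~= k over 0 < k <= kmax, instead of
    matching Taylor terms at k -> 0 as the conventional TE scheme does."""
    k = np.linspace(1e-3, kmax, n)
    A = 2.0 * np.sin(np.outer(k, np.arange(M) + 0.5))
    a, *_ = np.linalg.lstsq(A, k, rcond=None)
    return a
```

Spreading the fitting error over the whole band is what widens the accurate wavenumber range relative to Taylor matching, at the cost of a small error near k = 0.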
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold value. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
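The send-on-delta rule itself is simple to state in code; this is a generic sketch of the triggering logic, not the paper's filter:

```python
def send_on_delta(samples, delta):
    # Transmit a sample only when it deviates from the last transmitted
    # value by more than `delta`. Returns the (index, value) pairs that
    # would actually be sent over the network.
    sent = []
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            sent.append((i, x))
            last = x
    return sent

readings = [20.0, 20.1, 20.4, 21.2, 21.3, 19.7, 19.8]
print(send_on_delta(readings, delta=0.5))
# -> [(0, 20.0), (3, 21.2), (5, 19.7)]
```

Only three of the seven samples are transmitted, which is the resource saving the fault isolation filter must then tolerate.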
Introduction of the Floquet-Magnus expansion in solid-state nuclear magnetic resonance spectroscopy.
Mananga, Eugène S; Charpentier, Thibault
2011-07-28
In this article, we present an alternative expansion scheme called the Floquet-Magnus expansion (FME), used to solve a time-dependent linear differential equation, which is a central problem in quantum physics in general and solid-state nuclear magnetic resonance (NMR) in particular. The commonly used methods to treat theoretical problems in solid-state NMR are the average Hamiltonian theory (AHT) and the Floquet theory (FT), which have been successful for designing sophisticated pulse sequences and understanding different experiments. To the best of our knowledge, this is the first report of the FME scheme in the context of solid-state NMR, and we compare this approach with other series expansions. We present a modified FME scheme highlighting the importance of the (time-periodic) boundary conditions. This modified scheme greatly simplifies the calculation of higher order terms and is shown to be equivalent to the Floquet theory (single or multimode time-dependence), but allows one to derive the effective Hamiltonian in the Hilbert space. Basic applications of the FME scheme are described and compared to previous treatments based on AHT, FT, and static perturbation theory. We also discuss the convergence aspects of the three schemes (AHT, FT, and FME) and present the relevant references. © 2011 American Institute of Physics.
Nonlinear calculations of the time evolution of black hole accretion disks
NASA Technical Reports Server (NTRS)
Luo, C.
1994-01-01
Based on previous works on black hole accretion disks, I continue to explore the disk dynamics using the finite difference method to solve the highly nonlinear problem of time-dependent alpha disk equations. Here a radially zoned model is used to develop a computational scheme that accommodates functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work builds on the author's previous work on the steady disk structure and the linear analysis of disk dynamics, with the aim of applying it to X-ray emissions from black hole candidates (i.e., multiple-state spectra, instabilities, QPOs, etc.).
2.1 THz quantum-cascade laser operating up to 144 K based on a scattering-assisted injection design
Khanal, Sudeep; Reno, John L.; Kumar, Sushil
2015-07-22
A 2.1 THz quantum cascade laser (QCL) based on a scattering-assisted injection and resonant-phonon depopulation design scheme is demonstrated. The QCL is based on a four-well period implemented in the GaAs/Al0.15Ga0.85As material system. The QCL operates up to a heat-sink temperature of 144 K in pulsed-mode, which is considerably higher than that achieved for previously reported THz QCLs operating around the frequency of 2 THz. At 46 K, the threshold current-density was measured as ~745 A/cm2 with a peak-power output of ~10 mW. Electrically stable operation in a positive differential-resistance regime is achieved by a careful choice of design parameters. The results validate the robustness of scattering-assisted injection schemes for development of low-frequency (ν < 2.5 THz) QCLs.
Visual Privacy by Context: Proposal and Evaluation of a Level-Based Visualisation Scheme
Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco
2015-01-01
Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as in telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users. PMID:26053746
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
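The error-detecting idea behind such a barcoding scheme can be illustrated with constant-weight codes: if every barcode lights up exactly k of n mass-tag channels, then the union produced by a cell doublet has more than k positive channels and is therefore detectable. This is only a sketch of the principle; the actual channel counts and assignments in the protocol differ.

```python
from itertools import combinations

def barcodes(n_channels, k):
    # All constant-weight barcodes: exactly k of n channels positive.
    return [frozenset(c) for c in combinations(range(n_channels), k)]

codes = barcodes(6, 3)                      # C(6,3) = 20 sample barcodes
doublets_detected = all(len(a | b) > 3      # OR of two *different* codes
                        for a, b in combinations(codes, 2))
print(len(codes), doublets_detected)        # -> 20 True
```

Because two distinct 3-of-6 codes always differ in at least one channel, their union has at least four positive channels, so any doublet violates the fixed-weight constraint and can be discarded.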
NASA Astrophysics Data System (ADS)
Zhang, Hongtao; Wang, Pengfei
2012-06-01
Current schemes for detecting the status of passengers in airplanes cannot satisfy the stricter regulations recently released by the United States Transportation Security Administration. Based on an investigation of current seat occupancy sensors for vehicles, in this paper we present a novel seat occupancy sensor scheme based on Fiber Bragg Grating technology to improve the in-flight security of airplanes. This seat occupancy sensor system can be used to detect the status of passengers and to control the inflation of the airbags that have been installed in the airplanes of some major airlines under the new law. The scheme utilizes our previous research results on a Weight-In-Motion sensor system based on optical fiber Bragg gratings. In contrast to current seat occupancy sensors for vehicles, this new seat occupancy sensor has many merits that make it well suited to the aerospace industry and high-speed railway systems. Moreover, combined with the existing Fiber Bragg Grating strain or temperature sensor systems built into airplanes, the proposed method can form part of a complete airline passenger management system.
Talebi, H A; Khorasani, K; Tafazoli, S
2009-01-01
This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in the presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
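A minimal sketch of the switched-prediction idea follows, using a stand-in cost (absolute residue sum) for the true rate measure and two toy causal predictors; the actual APE/ACE predictor families are richer:

```python
import numpy as np

def switched_predict(block, predictors):
    # Choose, per block, the predictor with the cheapest residues
    # (absolute-residue sum stands in for the coding cost); the decoder
    # repeats the choice from a signalled index.
    costs = [np.abs(block[1:] - p(block)).sum() for p in predictors]
    best = int(np.argmin(costs))
    return best, block[1:] - predictors[best](block)

predictors = [
    lambda b: b[:-1],                                          # x[n] ~ x[n-1]
    lambda b: np.concatenate(([b[0]], 2 * b[1:-1] - b[:-2])),  # extrapolation
]

ramp = np.arange(10.0)                  # linearly increasing samples
idx, res = switched_predict(ramp, predictors)
print(idx, np.abs(res).sum())           # -> 1 1.0  (extrapolator wins)
```

On a linear ramp the extrapolating predictor zeroes out every residue after the first, so the switch picks it; on rougher data the order-1 predictor can win instead, which is the per-block adaptivity the paper exploits.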
Deng, Yong-Yuan; Chen, Chin-Ling; Tsaur, Woei-Jiunn; Tang, Yung-Wen; Chen, Jung-Hsuan
2017-12-15
As sensor networks and cloud computation technologies have rapidly developed over recent years, many services and applications integrating these technologies into daily life have come together as an Internet of Things (IoT). At the same time, aging populations have increased the need for expanded and more efficient elderly care services. Fortunately, elderly people can now wear sensing devices which relay data to a personal wireless device, forming a body area network (BAN). These personal wireless devices collect and integrate patients' personal physiological data, and then transmit the data to the backend of the network for related diagnostics. However, a great deal of the information transmitted by such systems is sensitive data, and must therefore be subject to stringent security protocols. Protecting this data from unauthorized access is thus an important issue in IoT-related research. In regard to a cloud healthcare environment, scholars have proposed a secure mechanism to protect sensitive patient information. Their schemes provide a general architecture; however, these previous schemes still have some vulnerabilities, and thus cannot guarantee complete security. This paper proposes a secure and lightweight body-sensor network based on the Internet of Things for cloud healthcare environments, in order to address the vulnerabilities discovered in previous schemes. The proposed authentication mechanism is applied to a medical reader to provide a more comprehensive architecture while also providing mutual authentication, and guaranteeing data integrity, user untraceability, and forward and backward secrecy, in addition to being resistant to replay attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee
2007-08-15
We have previously presented a knowledge-based computer-assisted detection (KB-CADe) system for the detection of mammographic masses. The system is designed to compare a query mammographic region with mammographic templates of known ground truth. The templates are stored in an adaptive knowledge database. Image similarity is assessed with information theoretic measures (e.g., mutual information) derived directly from the image histograms. A previous study suggested that the diagnostic performance of the system steadily improves as the knowledge database is initially enriched with more templates. However, as the database increases in size, an exhaustive comparison of the query case with each stored template becomes computationally burdensome. Furthermore, blind storing of new templates may result in redundancies that do not necessarily improve diagnostic performance. To address these concerns we investigated an entropy-based indexing scheme for improving the speed of analysis and for satisfying database storage restrictions without compromising the overall diagnostic performance of our KB-CADe system. The indexing scheme was evaluated on two different datasets as (i) a search mechanism to sort through the knowledge database, and (ii) a selection mechanism to build a smaller, concise knowledge database that is easier to maintain but still effective. There were two important findings in the study. First, entropy-based indexing is an effective strategy to quickly identify a subset of templates that are most relevant to a given query. Only this subset could be analyzed in more detail using mutual information for optimized decision making regarding the query. Second, a selective entropy-based deposit strategy may be preferable where only high entropy cases are maintained in the knowledge database.
Overall, the proposed entropy-based indexing scheme was shown to reduce the computational cost of our KB-CADe system by 55% to 80% while maintaining the system's diagnostic performance.
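In sketch form, the indexing idea reduces to shortlisting templates by histogram entropy before any expensive mutual-information comparison. The snippet below is illustrative only; the KB-CADe implementation (bin counts, ranking details) differs:

```python
import numpy as np

def entropy(img, bins=32):
    # Shannon entropy (bits) of the image's gray-level histogram.
    counts, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def shortlist(query, templates, n_keep):
    # Keep the templates whose histogram entropy is closest to the
    # query's; only this subset would then be ranked with mutual
    # information.
    eq = entropy(query)
    order = sorted(range(len(templates)),
                   key=lambda i: abs(entropy(templates[i]) - eq))
    return order[:n_keep]

rng = np.random.default_rng(1)
query = rng.uniform(0.0, 1.0, (16, 16))
others = [rng.uniform(0.0, 1.0, (16, 16)) ** (k + 2) for k in range(7)]
templates = [query.copy()] + others          # template 0: exact match
best = shortlist(query, templates, n_keep=3)
print(best[0])                               # -> 0: exact match sorts first
```

Entropy is a single scalar per template, so it can be precomputed and indexed; the costly pairwise mutual-information step then runs only on the short list.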
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdarnini, R., E-mail: valda@sissa.it
In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho–Chan vortex problem, the growth of Kelvin–Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.
Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young
2015-11-30
Data center interconnect with elastic optical network is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks to accommodate data center services. Building on this, the present study considers resource integration that breaks the limits of individual network devices, which can enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate data center services with resource integration when a single function or resource is too scarce to provision the services, and it enhances globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load scenario is also quantitatively evaluated based on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with other provisioning schemes.
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
Secure Nearest Neighbor Query on Crowd-Sensing Data
Cheng, Ke; Wang, Liangmin; Zhong, Hong
2016-01-01
Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor in the outsourced cloud server. However, the previous big data system structure has changed because of the crowd-sensing data. On the one hand, sensing data terminals as the data owner are numerous and mistrustful, while, on the other hand, in most cases, the terminals find it difficult to perform many security operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on the proxy server architecture, which is constructed by protocols of secure two-party computation and a secure Voronoi diagram algorithm. It not only preserves the data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes. PMID:27669253
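Stripped of its cryptographic layers, the query logic rests on the Voronoi property that a site is the nearest neighbor of q exactly when q passes every pairwise distance comparison against the other sites; in the scheme those comparisons run as secure two-party computations. A plaintext sketch:

```python
def nearest(q, sites):
    # Plaintext nearest-neighbour query; in the paper's scheme the same
    # comparisons are evaluated under secure two-party computation
    # between the proxy server and the cloud.
    return min(sites, key=lambda s: (s[0] - q[0]) ** 2 + (s[1] - q[1]) ** 2)

def in_voronoi_cell(q, site, sites):
    # q lies in `site`'s Voronoi cell iff q is at least as close to
    # `site` as to every other site (pairwise half-plane tests).
    d = (site[0] - q[0]) ** 2 + (site[1] - q[1]) ** 2
    return all(d <= (s[0] - q[0]) ** 2 + (s[1] - q[1]) ** 2 for s in sites)

sites = [(0, 0), (4, 0), (0, 4), (5, 5)]
q = (1, 1)
print(nearest(q, sites))                             # -> (0, 0)
print(in_voronoi_cell(q, nearest(q, sites), sites))  # -> True
```

Squared distances avoid square roots, which matters when each comparison becomes a secure protocol round.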
An almost symmetric Strang splitting scheme for nonlinear evolution equations.
Einkemmer, Lukas; Ostermann, Alexander
2014-07-01
In this paper we consider splitting methods for the time integration of parabolic and certain classes of hyperbolic partial differential equations, where one partial flow cannot be computed exactly. Instead, we use a numerical approximation based on the linearization of the vector field. This is of interest in applications as it allows us to apply splitting methods to a wider class of problems from the sciences. However, in the situation described, the classic Strang splitting scheme, while still being a method of second order, is no longer symmetric. This, in turn, implies that the construction of higher order methods by composition is limited to order three only. To remedy this situation, based on previous work in the context of ordinary differential equations, we construct a class of Strang splitting schemes that are symmetric up to a desired order. We show rigorously that, under suitable assumptions on the nonlinearity, these methods are of second order and can then be used to construct higher order methods by composition. In addition, we illustrate the theoretical results by conducting numerical experiments for the Brusselator system and the KdV equation.
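For reference, the classic Strang scheme with exact partial flows (the baseline the paper modifies for the case where one flow must be approximated by linearization) can be checked for second-order convergence on a small non-commuting linear system:

```python
import numpy as np

def expm(M, terms=30):
    # Matrix exponential by truncated Taylor series (adequate for the
    # small-norm matrices used below).
    out, term = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])      # [A, B] != 0

def strang(u0, h, steps):
    # Strang step for u' = (A + B) u: half-step in A, full B, half A.
    S = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)
    u = u0.copy()
    for _ in range(steps):
        u = S @ u
    return u

u0 = np.array([1.0, 0.0])
exact = expm(A + B) @ u0                    # reference solution at t = 1
e1 = np.linalg.norm(strang(u0, 0.10, 10) - exact)
e2 = np.linalg.norm(strang(u0, 0.05, 20) - exact)
ratio = e1 / e2
print(3.5 < ratio < 4.5)                    # -> True: error ~ O(h^2)
```

Halving the step roughly quarters the global error, the second-order signature that symmetry preserves; the paper's point is how to retain this (and enable composition to higher order) when one of the two flows is only available through linearization.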
Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.
Lan, Cuiling; Shi, Guangming; Wu, Feng
2010-04-01
Compound images are a combination of text, graphics and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme derived from H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the previous intramodes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images and keeps a comparably efficient performance to H.264 for natural images.
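The BCIM mode can be caricatured as a tiny palette quantizer: pick a few base colors, store a per-pixel index map, and reconstruct by lookup. The seeding and refinement choices below are hypothetical stand-ins, not the actual H.264 mode implementation:

```python
import numpy as np

def bcim_encode(block, n_colors):
    # Represent a pixel block by a palette of base colors + an index map.
    flat = block.reshape(-1, block.shape[-1]).astype(float)
    # Farthest-point seeding, then a few k-means-style refinement rounds.
    base = [flat[0].copy()]
    while len(base) < n_colors:
        d = np.min([np.linalg.norm(flat - b, axis=1) for b in base], axis=0)
        base.append(flat[d.argmax()].copy())
    base = np.array(base)
    for _ in range(5):
        d = np.linalg.norm(flat[:, None, :] - base[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        for c in range(n_colors):
            if np.any(idx == c):
                base[c] = flat[idx == c].mean(axis=0)
    return base, idx.reshape(block.shape[:-1])

def bcim_decode(base, index_map):
    return base[index_map]

# A text-like block with only two distinct colors (background + glyph).
block = np.zeros((4, 4, 3))
block[1:3, 1:3] = [255.0, 0.0, 0.0]
base, idx = bcim_encode(block, n_colors=2)
print(np.allclose(bcim_decode(base, idx), block))   # -> True (lossless here)
```

On text/graphics blocks with few distinct colors the palette representation is effectively lossless, which is why BCIM suits the anisotropic parts of compound images while natural-image blocks fall back to the ordinary intramodes via RDO.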
Binary counting with chemical reactions.
Kharam, Aleksandra; Jiang, Hua; Riedel, Marc; Parhi, Keshab
2011-01-01
This paper describes a scheme for implementing a binary counter with chemical reactions. The value of the counter is encoded by logical values of "0" and "1" that correspond to the absence and presence of specific molecular types, respectively. It is incremented when molecules of a trigger type are injected. Synchronization is achieved with reactions that produce a sustained three-phase oscillation. This oscillation plays a role analogous to a clock signal in digital electronics. Quantities are transferred between molecular types in different phases of the oscillation. Unlike all previous schemes for chemical computation, this scheme is dependent only on coarse rate categories for the reactions ("fast" and "slow"). Given such categories, the computation is exact and independent of the specific reaction rates. Although conceptual for the time being, the methodology has potential applications in domains of synthetic biology such as biochemical sensing and drug delivery. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
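Abstracted away from the reactions and the three-phase clock, the logical effect of an increment is a ripple carry over presence/absence bits. A discrete sketch of that logic (species names are illustrative):

```python
def increment(present, bits):
    # Ripple-carry increment on a molecular state: bit i is "1" exactly
    # when species S_i is present. Injecting the trigger flips S_0; a
    # carry propagates while it keeps meeting present bits.
    carry = True
    for i in range(bits):
        name = f"S{i}"
        if carry and name in present:
            present.remove(name)          # 1 + carry -> 0, carry onward
        elif carry:
            present.add(name)             # 0 + carry -> 1, carry stops
            carry = False
    return present

state = set()
for _ in range(6):                        # inject six trigger pulses
    state = increment(state, bits=4)
print(sorted(state))                      # -> ['S1', 'S2']  (binary 0110)
```

The chemical realization replaces this sequential loop with clocked reaction phases, but the rate independence claimed in the paper means only this coarse logic, not specific rate constants, determines the count.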
Realization of Quantum Digital Signatures without the Requirement of Quantum Memory
NASA Astrophysics Data System (ADS)
Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.
2014-07-01
Digital signatures are widely used to provide security for electronic communications, for example, in financial transactions and electronic mail. Currently used classical digital signature schemes, however, only offer security relying on unproven computational assumptions. In contrast, quantum digital signatures offer information-theoretic security based on laws of quantum mechanics. Here, security against forging relies on the impossibility of perfectly distinguishing between nonorthogonal quantum states. A serious drawback of previous quantum digital signature schemes is that they require long-term quantum memory, making them impractical at present. We present the first realization of a scheme that does not need quantum memory and which also uses only standard linear optical components and photodetectors. In our realization, the recipients measure the distributed quantum signature states using a new type of quantum measurement, quantum state elimination. This significantly advances quantum digital signatures as a quantum technology with potential for real applications.
Efficient Quantum Transmission in Multiple-Source Networks
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of a multiple-source network under the assumption of restricted maximum-flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
Analysis of composite ablators using massively parallel computation
NASA Technical Reports Server (NTRS)
Shia, David
1995-01-01
In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. 
The gas storage term is included in the explicit pressure calculation of both problems. Results from the ablative composite plate problem are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions, are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared to experimental results, and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5: for 32 CPUs, the speedup is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. There also appears to be an optimum number of CPUs to use for a given problem.
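The hybrid time-stepping idea above, applying the explicit method when the step is below the critical (stability-limited) time scale and switching to an implicit solve otherwise, can be sketched for a simple 1-D diffusion problem. This is an illustrative sketch only, not the paper's ablator solver: the model equation, grid, and stability criterion are assumptions chosen to show the selection logic.

```python
# Hybrid explicit/implicit stepper for u_t = alpha * u_xx (illustrative only).
# The scheme compares the requested dt against the explicit stability limit
# (the "critical time scale") and picks the cheaper explicit update when safe,
# falling back to an unconditionally stable implicit solve otherwise.
import numpy as np

def hybrid_step(u, dt, dx, alpha):
    """Advance u one time step with fixed (Dirichlet) boundary values."""
    dt_crit = dx * dx / (2.0 * alpha)      # FTCS explicit stability limit
    r = alpha * dt / (dx * dx)
    if dt <= dt_crit:
        # Explicit FTCS: simple, easily parallelizable, but stability-limited.
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return un
    # Implicit backward Euler: tridiagonal system, stable for any dt.
    n = len(u)
    A = np.eye(n)                          # boundary rows stay identity
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, u)

# Tiny demo: a hot left wall diffusing into a cold slab; dt far exceeds the
# critical time scale, so every step takes the implicit branch.
u = np.zeros(21)
u[0] = 1.0
for _ in range(50):
    u = hybrid_step(u, dt=0.1, dx=0.05, alpha=1.0)
```

After enough steps the profile relaxes to the linear steady state between the fixed boundary values, which is a quick sanity check on either branch.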
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Way, David W.
2017-01-01
Mars 2020, the next planned U.S. rover mission to land on Mars, is based on the design of the successful 2012 Mars Science Laboratory (MSL) mission. Mars 2020 retains most of the entry, descent, and landing (EDL) sequence of MSL, including the closed-loop entry guidance scheme based on the Apollo guidance algorithm. However, unlike MSL, Mars 2020 will trigger the parachute deployment and descent sequence on a range trigger rather than the previously used velocity trigger. This difference will greatly reduce the landing ellipse sizes. Additionally, the relative contribution of each model to the total ellipse size has changed greatly due to the switch to the range trigger. This paper considers the effect on trajectory dispersions of changing the trigger schemes and the contributions of these various models to trajectory and EDL performance.
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for exclusion of false variant identifications from the search results involving analysis of modifications mimicking single amino acid substitutions. Also, we propose a de novo based scoring scheme for assessment of identified point mutations. In the scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improve the querying. Among the related techniques, relevance feedback has become a hot research aspect because it combines the information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi- level image content analysis, the relevant images from the user could be automatically analyzed in different levels and the querying could be modified in terms of different analysis results. Secondly, to make it more convenient to the user, the procedure of relevance feedback could be led with memory or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results gained by our self-adaptive relevance feedback are given.
A scheme for a flexible classification of dietary and health biomarkers.
Gao, Qian; Praticò, Giulia; Scalbert, Augustin; Vergères, Guy; Kolehmainen, Marjukka; Manach, Claudine; Brennan, Lorraine; Afman, Lydia A; Wishart, David S; Andres-Lacueva, Cristina; Garcia-Aloy, Mar; Verhagen, Hans; Feskens, Edith J M; Dragsted, Lars O
2017-01-01
Biomarkers are an efficient means to examine intakes or exposures and their biological effects and to assess system susceptibility. Aided by novel profiling technologies, the biomarker research field is undergoing rapid development and new putative biomarkers are continuously emerging in the scientific literature. However, the existing concepts for classification of biomarkers in the dietary and health area may be ambiguous, leading to uncertainty about their application. In order to better understand the potential of biomarkers and to communicate their use and application, it is imperative to have a solid scheme for biomarker classification that will provide a well-defined ontology for the field. In this manuscript, we provide an improved scheme for biomarker classification based on their intended use rather than the technology or outcomes (six subclasses are suggested: food compound intake biomarkers (FCIBs), food or food component intake biomarkers (FIBs), dietary pattern biomarkers (DPBs), food compound status biomarkers (FCSBs), effect biomarkers, physiological or health state biomarkers). The application of this scheme is described in detail for the dietary and health area and is compared with previous biomarker classification for this field of research.
Liang, Yunlei; Du, Zhijiang; Sun, Lining
2017-01-01
The tendon driven mechanism using a cable and pulley to transmit power is adopted by many surgical robots. However, backlash hysteresis objectively exists in cable-pulley mechanisms, and this nonlinear problem is a great challenge in precise position control during the surgical procedure. Previous studies mainly focused on the transmission characteristics of the cable-driven system and constructed transmission models under particular assumptions to solve nonlinear problems. However, these approaches are limited because the modeling process is complex and the transmission models lack general applicability. This paper presents a novel position compensation control scheme to reduce the impact of backlash hysteresis on the positioning accuracy of surgical robots’ end-effectors. In this paper, a position compensation scheme using a support vector machine based on feedforward control is presented to reduce the position tracking error. To validate the proposed approach, experimental validations are conducted on our cable-pulley system and comparative experiments are carried out. The results show remarkable improvements in the performance of reducing the positioning error for the use of the proposed scheme. PMID:28974011
NASA Astrophysics Data System (ADS)
Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun
2018-03-01
Most high order computational fluid dynamics (CFD) methods for compressible flows are based on Riemann solver for the flux evaluation and Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is the easy implementation and stability enhancement by introducing more middle stages. However, the nth-order time accuracy needs no less than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can be used to achieve the same order of time accuracy using less middle stages with the use of the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents its direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, the 5th-order schemes with 2 and 3 stages will be developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods will be evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and present accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness, for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. 
The current high order MSMD GKS is a multi-dimensional scheme with incorporation of both normal and tangential spatial derivatives of flow variables at a cell interface in the flux evaluation. The scheme can be extended straightforwardly to viscous flow computation in unstructured mesh. It provides a promising direction for the development of high-order CFD methods for the computation of complex flows, such as turbulence and acoustics with shock interactions.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa
2015-01-01
The Goddard microphysics scheme was recently improved by adding a 4th ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble model (GCE) for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved both in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified Weather Research and Forecasting model (NU-WRF) and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree as well as or significantly better with observations than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE due to the less restrictive open lateral boundaries.
Lager, Malin; Mernelius, Sara; Löfgren, Sture; Söderman, Jan
2016-01-01
Healthcare-associated infections caused by Escherichia coli and antibiotic resistance due to extended-spectrum beta-lactamase (ESBL) production constitute a threat against patient safety. To identify, track, and control outbreaks and to detect emerging virulent clones, typing tools of sufficient discriminatory power that generate reproducible and unambiguous data are needed. A probe-based real-time PCR method targeting multiple single nucleotide polymorphisms (SNPs) was developed. The method was based on the multi locus sequence typing scheme of the Institut Pasteur and on adaptation of previously described typing assays. An 8-SNP panel that reached a Simpson's diversity index of 0.95 was established, based on analysis of sporadic E. coli cases (ESBL n = 27 and non-ESBL n = 53). This multi-SNP assay was used to identify the sequence type 131 (ST131) complex according to Achtman's multi locus sequence typing scheme. However, it did not fully discriminate within the complex but provided a diagnostic signature that outperformed a previously described detection assay. Pulsed-field gel electrophoresis typing of isolates from a presumed outbreak (n = 22) identified two outbreaks (ST127 and ST131) and three different non-outbreak-related isolates. Multi-SNP typing generated congruent data except for one non-outbreak-related ST131 isolate. We consider multi-SNP real-time PCR typing an accessible primary generic E. coli typing tool for rapid and uniform type identification.
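The Simpson's diversity index reported for the SNP panel is a standard measure of a typing scheme's discriminatory power, commonly computed in the Hunter-Gaston form. A minimal sketch, using hypothetical type assignments rather than the study's isolates:

```python
# Simpson's diversity index (Hunter-Gaston form) for a typing scheme:
# D = 1 - sum n_j(n_j - 1) / (N(N - 1)), where n_j are the type counts.
# Higher D means better discrimination between isolates.
from collections import Counter

def simpsons_diversity(types):
    """Compute D from a list of per-isolate type assignments."""
    counts = Counter(types).values()
    n = sum(counts)
    if n < 2:
        return 0.0
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical SNP-profile assignments for 10 isolates (not the study's data):
profiles = ["A", "A", "B", "C", "C", "C", "D", "E", "F", "G"]
d = simpsons_diversity(profiles)
```

A panel assigning every isolate the same type scores 0; a panel giving every isolate a unique type scores 1, which is why an index of 0.95 indicates strong discriminatory power.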
ERIC Educational Resources Information Center
Ma, Yongjun; Wan, Yanlan
2017-01-01
Based on previous international studies, a content analysis scheme has been designed and used from the perspective of culture to study the history of science (HOS) in science textbooks. Nineteen sets of Chinese science textbooks have been analyzed. It has been found that there are noticeable changes in the quantity, content, layout, presentation,…
A New Model for Simulating Gas Metal Arc Welding based on Phase Field Model
NASA Astrophysics Data System (ADS)
Jiang, Yongyue; Li, Li; Zhao, Zhijiang
2017-11-01
Many physical processes, such as metal melting, multiphase fluid flow, heat and mass transfer, and the thermocapillary (Marangoni) effect, occur in gas metal arc welding (GMAW), which should therefore be treated as a mixture system. In this paper, based on previous work, we propose a new model to simulate GMAW comprising the Navier-Stokes equations, the phase field model, and the energy equation. Unlike most previous work, we incorporate the thermocapillary effect into the phase field model through the mixture energy, which differs from the volume-of-fluid (VOF) method widely used for GMAW before. We also consider gravity, electromagnetic force, surface tension, the buoyancy effect, and arc pressure in the momentum equation. The spray transfer, especially the projected transfer in GMAW, is computed as a numerical example with a continuous finite element method and a modified midpoint scheme. A pulsed welding current is used in the numerical example, and the simulated metal transfer fits GMAW theory well. From the results compared with high-speed photography data and the VOF model, the accuracy and stability of the model and scheme are validated, and the new model shows higher precision.
From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.
Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry
2015-07-10
Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers a robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate ∼1.6% in the photons detected in the gates. This scheme uses only 3 Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer may be less challenging than previously thought.
A disclosure scheme for protecting the victims of domestic violence.
Griffith, Richard
2017-06-08
Richard Griffith, Senior Lecturer in Health Law at Swansea University, explains how the Domestic Violence Disclosure Scheme aims to protect potential victims by allowing disclosure of a partner's previous crimes.
Evaluation of the Danish Leave Schemes. Summary of a Report.
ERIC Educational Resources Information Center
Andersen, Dines; Appeldorn, Alice; Weise, Hanne
An evaluation examined how the Danish leave schemes, an offer to employed and unemployed persons who qualify for unemployment benefits, were functioning and to what extent the objectives have been achieved. It was found that 60 percent of those taking leave had previously been unemployed; women accounted for two-thirds of those joining the scheme;…
Unified powered flight guidance
NASA Technical Reports Server (NTRS)
Brand, T. J.; Brown, D. W.; Higgins, J. P.
1973-01-01
A complete revision of the orbiter powered flight guidance scheme is presented. A unified approach to powered flight guidance was taken to accommodate all phases of exo-atmospheric orbiter powered flight, from ascent through deorbit. The guidance scheme was changed from the previous modified version of the Lambert Aim Point Maneuver Mode used in Apollo to one that employs linear tangent guidance concepts. This document replaces the previous ascent phase equation document.
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC has better throughput than APC, but suffers from a higher packet error rate. The wireless channel state changes all the time. Because of this random and time-varying nature of the wireless channel, individual application of the SR ARQ, PC, or APC scheme cannot give the desired level of throughput. Better throughput can be achieved if the appropriate transmission scheme is used based on the condition of the channel. Based on this approach, an adaptive packet combining scheme has been proposed to achieve better throughput. The proposed scheme adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes.
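The per-transmission selection logic described above can be sketched as a mapping from an estimated channel state to a scheme. The BER thresholds and the good/moderate/bad state boundaries below are illustrative assumptions, not the authors' parameters:

```python
# Adaptive scheme selection over a three-state channel model (sketch).
# A clean channel needs no combining (plain SR ARQ); a moderately noisy
# channel favors PC for throughput; a bad channel favors APC for its
# stronger error correction despite lower throughput.

def channel_state(bit_error_rate):
    """Map an estimated BER to a three-state channel condition (thresholds assumed)."""
    if bit_error_rate < 1e-4:
        return "good"
    if bit_error_rate < 1e-2:
        return "moderate"
    return "bad"

def select_scheme(bit_error_rate):
    """Pick the transmission scheme for the current channel condition."""
    return {"good": "SR ARQ", "moderate": "PC", "bad": "APC"}[
        channel_state(bit_error_rate)
    ]
```

In a real system the channel estimate would come from recent ACK/NAK history or pilot measurements rather than a known BER.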
A Crosslinker Based on a Tethered Electrophile for Mapping Kinase-Substrate Networks
Riel-Mehan, Megan M; Shokat, Kevan M
2014-01-01
Despite the continuing progress made towards mapping kinase signaling networks, there are still many phosphorylation events for which the responsible kinase has not yet been identified. We are interested in addressing this problem through forming covalent crosslinks between a peptide substrate and the corresponding phosphorylating kinase. Previously we reported a dialdehyde-based kinase binding probe capable of such a reaction with a peptide containing a cysteine substituted for the phosphorylatable ser/thr/tyr residue. Here, we examine the yield of the previously reported dialdehyde-based probe and report that it possesses a significant limitation in terms of crosslinked kinase-substrate product yield. To address this limitation, we develop a crosslinking scheme based on a kinase activity-based probe, and this new cross-linker provides an increase in efficiency and substrate specificity, including in the context of cell lysate. PMID:24746561
A Lightweight Data Integrity Scheme for Sensor Networks
Kamel, Ibrahim; Juma, Hussam
2011-01-01
Limited energy is the most critical constraint that limits the capabilities of wireless sensor networks (WSNs). Most sensors operate on batteries with limited power. Battery recharging or replacement may be impossible. Security mechanisms that are based on public key cryptographic algorithms such as RSA and digital signatures are prohibitively expensive in terms of energy consumption and storage requirements, and thus unsuitable for WSN applications. This paper proposes a new fragile watermarking technique to detect unauthorized alterations in WSN data streams. We propose the FWC-D scheme, which uses group delimiters to keep the sender and receivers synchronized and help them avoid ambiguity in the event of data insertion or deletion. The watermark, which is computed using a hash function, is stored in the previous group in a linked-list fashion to ensure data freshness and mitigate replay attacks. FWC-D generates a serial number (SN) that is attached to each group to help the receiver determine how many group insertions or deletions have occurred. A detailed security analysis that compares the proposed FWC-D scheme with SGW, one of the latest integrity schemes for WSNs, shows that FWC-D is more robust than SGW. Simulation results further show that the proposed scheme is much faster than SGW. PMID:22163840
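The linked-list watermarking idea, where each group's hash-based watermark is stored in the previous group alongside a serial number, can be sketched as follows. The SHA-256 hash, field names, and group layout are illustrative assumptions, not the exact FWC-D wire format:

```python
# Sketch of chained fragile watermarking for grouped sensor readings.
# Each group carries a serial number (sn) and, in wm_next, the watermark
# (hash) of the FOLLOWING group, so tampering with any group breaks the
# chain stored in its predecessor.
import hashlib

def group_hash(readings, serial):
    """Watermark of one group: hash over its serial number and readings."""
    h = hashlib.sha256()
    h.update(str(serial).encode())
    for r in readings:
        h.update(str(r).encode())
    return h.hexdigest()

def make_stream(groups):
    """Sender side: attach serial numbers and chain the watermarks."""
    stream = [{"sn": i, "data": g, "wm_next": None} for i, g in enumerate(groups)]
    for i in range(len(stream) - 1):
        stream[i]["wm_next"] = group_hash(stream[i + 1]["data"], stream[i + 1]["sn"])
    return stream

def verify(stream):
    """Receiver side: return indices of groups whose watermark check fails."""
    bad = []
    for i in range(len(stream) - 1):
        if stream[i]["wm_next"] != group_hash(stream[i + 1]["data"], stream[i + 1]["sn"]):
            bad.append(i + 1)
    return bad
```

Because the serial number is hashed into the watermark, replaying an old group or inserting/deleting groups desynchronizes the chain and is flagged at verification.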
Stabilized finite element methods to simulate the conductances of ion channels
NASA Astrophysics Data System (ADS)
Tu, Bin; Xie, Yan; Zhang, Linbo; Lu, Benzhuo
2015-03-01
We have previously developed a finite element simulator, ichannel, to simulate ion transport through three-dimensional ion channel systems by solving the Poisson-Nernst-Planck (PNP) and size-modified Poisson-Nernst-Planck (SMPNP) equations, and succeeded in simulating some ion channel systems. However, the iterative solution of the coupled Poisson and Nernst-Planck equations has difficulty converging for some large systems. One reason we found is that the NP equations are advection-dominated diffusion equations, which causes difficulties in the usual FE solution. Stabilized schemes have been applied to compute fluid flow in various research fields. However, they have not been studied in the simulation of ion transport through three-dimensional models based on experimentally determined ion channel structures. In this paper, two stabilization techniques, the streamline upwind Petrov-Galerkin (SUPG) method and the pseudo residual-free bubble (PRFB) function, are introduced to enhance the numerical robustness and convergence performance of the finite element algorithm in ichannel. The conductances of the voltage-dependent anion channel (VDAC) and the anthrax toxin protective antigen pore (PA) are simulated to validate the stabilization techniques. Both stabilized schemes give reasonable results for the two proteins, with decent agreement with both experimental data and Brownian dynamics (BD) simulations. For a variety of numerical tests, it is found that the simulator effectively avoids the previous numerical instability after introducing the stabilization methods. A comparison based on our test data set indicates that SUPG and PRFB have similar performance (the latter is slightly more accurate and stable), while SUPG is relatively more convenient to implement.
Deng, Yong-Yuan; Chen, Chin-Ling; Tsaur, Woei-Jiunn; Tang, Yung-Wen; Chen, Jung-Hsuan
2017-01-01
As sensor networks and cloud computation technologies have rapidly developed over recent years, many services and applications integrating these technologies into daily life have come together as an Internet of Things (IoT). At the same time, aging populations have increased the need for expanded and more efficient elderly care services. Fortunately, elderly people can now wear sensing devices which relay data to a personal wireless device, forming a body area network (BAN). These personal wireless devices collect and integrate patients’ personal physiological data, and then transmit the data to the backend of the network for related diagnostics. However, a great deal of the information transmitted by such systems is sensitive data, and must therefore be subject to stringent security protocols. Protecting this data from unauthorized access is thus an important issue in IoT-related research. In regard to a cloud healthcare environment, scholars have proposed a secure mechanism to protect sensitive patient information. Their schemes provide a general architecture; however, these previous schemes still have some vulnerability, and thus cannot guarantee complete security. This paper proposes a secure and lightweight body-sensor network based on the Internet of Things for cloud healthcare environments, in order to address the vulnerabilities discovered in previous schemes. The proposed authentication mechanism is applied to a medical reader to provide a more comprehensive architecture while also providing mutual authentication, and guaranteeing data integrity, user untraceability, and forward and backward secrecy, in addition to being resistant to replay attack. PMID:29244776
Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C
2008-01-10
Gene expression data frequently contain missing values, however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remains largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures x time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrixes and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. 
Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity, while neighbour-based methods (KNN, OLS, LSA, LLS) performed better on data with higher complexity. We also found that the EBS and STS schemes serve as complementary and effective tools for selecting the optimal imputation algorithm.
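The entropy-based complexity measure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it computes the normalized entropy of the singular-value spectrum, which is low when the expression matrix maps well onto a low-dimensional subspace and approaches one for unstructured data.

```python
import numpy as np

def matrix_entropy(X):
    # Normalized entropy of the singular-value spectrum as a complexity
    # proxy: near 0 for low-rank (simple) data, near 1 for unstructured data.
    s = np.linalg.svd(X, compute_uv=False)
    p = s**2 / np.sum(s**2)        # relative energy of each component
    p = p[p > 0]                   # drop numerically zero components
    return float(-np.sum(p * np.log(p)) / np.log(len(s)))
```

A rank-one matrix yields an entropy near zero, while an i.i.d. Gaussian matrix yields a value close to one, matching the intuition that global methods suit low-complexity data.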
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
Veselka, Walter; Anderson, James T; Kordek, Walter S
2010-05-01
Considerable resources are being used to develop and implement bioassessment methods for wetlands to ensure that "biological integrity" is maintained under the United States Clean Water Act. Previous research has demonstrated that avian composition is susceptible to human impairments at multiple spatial scales. Using a site-specific disturbance gradient, we built avian wetland indices of biological integrity (AW-IBI) specific to two wetland classification schemes, one based on vegetative structure and the other based on the wetland's position in the landscape and sources of water. The resulting class-specific AW-IBI was comprised of one to four metrics that varied in their sensitivity to the disturbance gradient. Some of these metrics were specific to only one of the classification schemes, whereas others could discriminate varying levels of disturbance regardless of classification scheme. Overall, all of the derived biological indices specific to the vegetative structure-based classes of wetlands had a significant relation with the disturbance gradient; however, the biological index derived for floodplain wetlands exhibited a more consistent response to a local disturbance gradient. We suspect that the consistency of this response is due to the inherent nature of the connectivity of available habitat in floodplain wetlands.
In-TFT-array-process micro defect inspection using nonlinear principal component analysis.
Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang
2009-11-20
Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacture, and has received much attention in the field of automatic optical inspection (AOI). Previously, most focus was put on the problems of macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those in the TFT array process, which is the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process is therefore considered a difficult task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, which is a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect the defects from the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image.
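The KPCA feature-extraction step can be sketched classically. The following is an illustrative sketch only (an assumed RBF kernel, not the authors' implementation): kernel PCA centers the kernel matrix in feature space and projects samples onto its leading eigenvectors, producing nonlinear features on which a downstream classifier such as the ISVM can operate.

```python
import numpy as np

def kpca_scores(X, gamma=1.0, n_components=2):
    # Minimal kernel-PCA sketch with an RBF kernel (illustrative only).
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the leading ones
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalize eigenvectors
    return Kc @ alphas                           # projected nonlinear features
```

In practice a library implementation (e.g. scikit-learn's `KernelPCA`) would be preferred; the sketch only shows the mechanics behind the nonlinear projection.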
Control of parallel manipulators using force feedback
NASA Technical Reports Server (NTRS)
Nanua, Prabjot
1994-01-01
Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control one. It is a simple constant-gain control scheme better suited to parallel mechanisms. The force control scheme can be easily modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.
LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network
NASA Astrophysics Data System (ADS)
Cha, Daehyun; Hwang, Chansik
Previous vehicle surveillance research on distributed sensor networks focused on overcoming the power limitations and communication bandwidth constraints of sensor nodes. Despite these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation, and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for lightweight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.
NASA Astrophysics Data System (ADS)
Nji, Jones; Li, Guoqiang
2012-02-01
The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme that was introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between two crack surfaces is narrowed and a smaller amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme followed in this study heals structural-length scale damage in the SMP particulate composite to a meaningful extent and with less sacrifice of structural capacity.
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images against a threshold. If none of the obtained gray-scale differences exceeds the threshold value, the fuzzy matching of quantum images is successful. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
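The matching criterion itself has a straightforward classical analogue, which may help clarify the scheme. The sketch below is illustrative only (it captures the gray-scale-difference test, not the quantum NEQR encoding or the parallel speedup): a window matches when every pixelwise difference stays within the threshold.

```python
import numpy as np

def fuzzy_match(reference, template, threshold):
    # Classical analogue of the gray-scale-difference criterion: slide the
    # template over the reference; a position matches when ALL pixelwise
    # absolute differences are <= threshold.
    rh, rw = reference.shape
    th, tw = template.shape
    tpl = template.astype(int)
    hits = []
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            window = reference[y:y + th, x:x + tw].astype(int)
            if np.all(np.abs(window - tpl) <= threshold):
                hits.append((y, x))
    return hits
```

The quantum version evaluates all window positions in superposition, which is where the claimed exponential speedup over this classical double loop originates.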
Computer-aided detection of initial polyp candidates with level set-based adaptive convolution
NASA Astrophysics Data System (ADS)
Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong
2009-02-01
In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.
77 FR 1009 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-524 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-09
... Field Repair Scheme FRS5367/B, and A mandatory terminating action to the repetitive inspections to be... Repaired Using RR Field Repair Scheme FRS5367/B Borescope-inspect combustion liner head sections previously repaired using RR Field Repair Scheme FRS5367/B. Use paragraphs 3.A.(1) through 3.A.(5) of the...
75 FR 63727 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-524 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... models that have not been repaired to RR Field Repair Scheme FRS5367/B, and A mandatory terminating... Repaired Using RR Field Repair Scheme FRS5367/B (h) If the combustion liner head section was previously repaired using RR Field Repair Scheme FRS5367/B, do the following: (1) Borescope-inspect combustion liner...
On parasupersymmetric oscillators and relativistic vector mesons in constant magnetic fields
NASA Technical Reports Server (NTRS)
Debergh, Nathalie; Beckers, Jules
1995-01-01
Johnson-Lippmann considerations on oscillators and their connection with minimal coupling schemes are revisited in order to introduce a new Sakata-Taketani equation describing vector mesons in interaction with a constant magnetic field. This new proposal, based on a specific parasupersymmetric oscillator-like system, is characterized by real energies, as opposed to previously reported relativistic equations corresponding to this interacting context.
NASA Technical Reports Server (NTRS)
Williams, D. A.; Greeley, R.; Neukum, G.; Wagner, R.
1993-01-01
New visible and near-infrared multispectral data of the Moon were obtained by the Galileo spacecraft in December, 1990. These data were calibrated with Earth-based spectral observations of the nearside to compare compositional information to previously uncharacterized mare basalts filling craters and basins on the western near side and eastern far side. A Galileo-based spectral classification scheme, modified from the Earth-based scheme developed by Pieters, designates the different spectral classifications of mare basalt observed using the 0.41/0.56 micron reflectance ratio (titanium content), 0.56 micron reflectance values (albedo), and 0.76/0.99 micron reflectance ratio (absorption due to Fe(2+) in mafic minerals and glass). In addition, age determinations from crater counts and results of a linear spectral mixing model were used to assess the volcanic histories of specific regions of interest. These interpreted histories were related to models of mare basalt petrogenesis in an attempt to better understand the evolution of lunar volcanism.
Wang, Jing; Xuan, Yi; Qi, Minghao; Huang, Haiyang; Li, You; Li, Ming; Chen, Xin; Sheng, Zhen; Wu, Aimin; Li, Wei; Wang, Xi; Zou, Shichang; Gan, Fuwan
2015-05-01
A broadband and fabrication-tolerant on-chip scalable mode-division multiplexing (MDM) scheme based on mode-evolution counter-tapered couplers is designed and experimentally demonstrated on a silicon-on-insulator (SOI) platform. Due to the broadband advantage offered by mode evolution, the two-mode MDM link exhibits a very large −1 dB bandwidth of >180 nm, which is considerably larger than most of the previously reported MDM links, whether based on mode interference or mode evolution. In addition, the performance metrics remain stable for large device-width deviations from the designed values, from −60 nm to +40 nm, and for temperature variations from −25°C to 75°C. This MDM scheme can be readily extended to higher-order mode multiplexing, and a three-mode MDM link is measured with less than −10 dB crosstalk over the 1.46 to 1.64 μm wavelength range.
One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.
Lhomme, J; Bouvier, C; Mignot, E; Paquier, A
2006-01-01
A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the streets network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipes junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreements can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
Zong, Guo; Wang, Ahong; Wang, Lu; Liang, Guohua; Gu, Minghong; Sang, Tao; Han, Bin
2012-07-20
1000-Grain weight and spikelet number per panicle are two important components of rice grain yield. In our previous study, eight quantitative trait loci (QTLs) conferring spikelet number per panicle and 1000-grain weight were mapped through sequencing-based genotyping of 150 rice recombinant inbred lines (RILs). In this study, we validated the effects of four QTLs from Nipponbare using chromosome segment substitution lines (CSSLs), and pyramided eight grain-yield-related QTLs. The new lines containing the eight QTLs with positive effects showed increased panicle and spikelet size as compared with the parent variety 93-11. We further proposed a novel pyramid breeding scheme based on marker-assisted and phenotype selection (MAPS). This scheme allowed pyramiding of as many as 24 QTLs in a single hybridization without massive cross work. This study provides insights into the molecular basis of rice grain yield, of direct value for high-yielding rice breeding. Copyright © 2012. Published by Elsevier Ltd.
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted when developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required with the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated. The proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. Stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the results presented.
Strained layer relaxation effect on current crowding and efficiency improvement of GaN based LED
NASA Astrophysics Data System (ADS)
Aurongzeb, Deeder
2012-02-01
The efficiency droop effect of GaN-based LEDs at high power and high temperature has been addressed by several groups based on carrier delocalization and the photon recycling effect (radiative recombination). We extend the previous droop models to optical loss parameters. We correlate strained layer relaxation at high temperature and high current density to carrier delocalization. We propose a third-order model and show that Shockley-Read-Hall and Auger recombination effects are not enough to account for the efficiency loss. Several strained layer modification schemes are proposed based on the model.
NASA Astrophysics Data System (ADS)
Semplice, Matteo; Loubère, Raphaël
2018-02-01
In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme, but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner, because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
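The detect-then-recompute idea can be illustrated on a much simpler problem. The sketch below is a 1D scalar-advection analogue, not the paper's 2D hydrodynamics scheme: it computes an unlimited higher-order candidate, flags cells that violate local bounds or are non-finite, and redoes only those cells with a robust first-order update (a true MOOD-style scheme would recompute shared fluxes to preserve conservation; this cell-local fallback is only a sketch).

```python
import numpy as np

def step(u, nu):
    # One a-posteriori-limited step for du/dt + c du/dx = 0 on a periodic
    # grid; nu = c*dt/dx is the Courant number, 0 < nu <= 1, c > 0.
    slope = 0.5 * (np.roll(u, -1) - np.roll(u, 1))   # unlimited central slope
    ul = u + 0.5 * slope                              # right-interface states
    cand = u - nu * (ul - np.roll(ul, 1))             # 2nd-order candidate
    # Detection: non-finite values or new extrema beyond local bounds
    lo = np.minimum(np.minimum(np.roll(u, 1), u), np.roll(u, -1))
    hi = np.maximum(np.maximum(np.roll(u, 1), u), np.roll(u, -1))
    bad = ~np.isfinite(cand) | (cand < lo - 1e-12) | (cand > hi + 1e-12)
    # Recompute troubled cells only, with the dissipative first-order scheme
    cand[bad] = (u - nu * (u - np.roll(u, 1)))[bad]
    return cand
```

Because the first-order upwind update is monotone, the repaired solution stays within the local bounds even across a discontinuity, which is exactly the stability property the abstract describes for shocks and contacts.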
Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David
2009-02-01
Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method reasonably accurately segmented the airways trees with a lower false-positive identification rate as compared with other previously reported schemes that are based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
Arshad, Hamed; Nikooghadam, Morteza
2014-12-01
Nowadays, with the widespread use of the internet, healthcare delivery services are provided remotely by telecare medicine information systems (TMISs). A secure mechanism for authentication and key agreement is one of the most important security requirements for TMISs. Recently, Tan proposed a user anonymity preserving three-factor authentication scheme for TMIS. The present paper shows that Tan's scheme is vulnerable to replay attacks and denial-of-service attacks. In order to overcome these security flaws, a new and efficient three-factor anonymous authentication and key agreement scheme for TMIS is proposed. Security and performance analysis shows the superiority of the proposed scheme in comparison with previously proposed schemes related to the security of TMISs.
Comparison of two matrix data structures for advanced CSM testbed applications
NASA Technical Reports Server (NTRS)
Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.
1989-01-01
The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
GPU accelerated FDTD solver and its application in MRI.
Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S
2010-01-01
The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we will present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B(1) profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of electromagnetic field compared with its CPU counterpart. The acceleration in runtime has made such investigation possible, and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
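The update stencil that such a GPU solver parallelizes can be shown in its simplest form. The sketch below is a minimal 1D Yee-grid FDTD loop in normalized units (Courant number of 1, a hypothetical soft Gaussian source); the paper's solver is a full 3D GPU implementation, but each grid point's update has exactly this local, data-parallel structure.

```python
import numpy as np

def fdtd_1d(steps, n=200, src=100):
    # Minimal 1D FDTD (Yee) sketch in normalized units with c*dt/dx = 1.
    ez = np.zeros(n)   # electric field
    hy = np.zeros(n)   # magnetic field
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # H update from curl E
        ez[1:] += hy[1:] - hy[:-1]                    # E update from curl H
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source
    return ez
```

Because every field update depends only on immediate neighbors from the previous half-step, the loops over grid points map naturally onto GPU threads, which is the source of the dramatic runtime reduction the abstract reports.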
Wavelet Representation of the Corneal Pulse for Detecting Ocular Dicrotism
Melcer, Tomasz; Danielewska, Monika E.; Iskander, D. Robert
2015-01-01
Purpose To develop a reliable and powerful method for detecting ocular dicrotism from non-invasively acquired signals of corneal pulse, without knowledge of the underlying cardiopulmonary information present in signals of ocular blood pulse and electrical heart activity. Methods Retrospective data from a study on glaucomatous and age-related changes in corneal pulsation [PLOS ONE 9(7) (2014): e102814] involving 261 subjects was used. A continuous wavelet representation of the signal derivative of the corneal pulse was considered, with a complex Gaussian derivative function chosen as the mother wavelet. A gray-level co-occurrence matrix was applied to the image (heat-map) of the CWT to yield a set of parameters that can be used to devise ocular dicrotic pulse detection schemes based on the Conditional Inference Tree and Random Forest models. The detection scheme was first tested on synthetic signals resembling those of a dicrotic and a non-dicrotic ocular pulse before being used on all 261 real recordings. Results A detection scheme based on a single feature of the continuous wavelet transform of the corneal pulse signal resulted in a low detection rate. Conglomeration of a set of features based on measures of texture (homogeneity, correlation, energy, and contrast) resulted in a high detection rate reaching 93%. Conclusion It is possible to reliably detect a dicrotic ocular pulse from signals of corneal pulsation without the need to acquire additional signals related to heart activity, which was the previous state of the art. The proposed scheme can be applied to other non-stationary biomedical signals related to ocular dynamics. PMID:25906236
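The texture measures named in the abstract come from a gray-level co-occurrence matrix. The sketch below is an illustrative reconstruction only (a GLCM over horizontal neighbor pairs at offset 1, an assumed 8-level quantization of an image scaled to [0, 1]; not the authors' pipeline), showing how homogeneity, energy, and contrast are derived from the normalized co-occurrence counts.

```python
import numpy as np

def glcm_features(img, levels=8):
    # GLCM sketch: count co-occurrences of quantized gray levels between
    # horizontally adjacent pixels, then derive standard texture measures.
    q = (np.clip(img, 0, 1) * (levels - 1)).astype(int)  # quantize to levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                                   # normalize to probs
    i, j = np.indices(glcm.shape)
    return {
        'homogeneity': float(np.sum(glcm / (1 + np.abs(i - j)))),
        'energy': float(np.sum(glcm**2)),
        'contrast': float(np.sum(glcm * (i - j)**2)),
    }
```

On a CWT heat-map, a dicrotic pulse changes the texture of the time-scale image, which is why combining several such measures outperformed any single CWT feature.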
NASA Astrophysics Data System (ADS)
Felfelani, F.; Pokhrel, Y. N.
2017-12-01
In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on the soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and the data from the Soil Moisture Active Passive (SMAP) mission, which has been available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two different irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate model results with the available irrigation water use data at the county scale. Finally, we present results of groundwater simulations by implementing a simple pumping scheme based on our previous studies. Results from the implementation of current irrigation parameterizations used in various LSMs show a relatively large difference in the vertical soil moisture content profile (e.g., 0.2 mm³/mm³) at the point scale, which is mostly reduced when averaged over relatively large regions (e.g., 0.04 mm³/mm³ in the High Plains region). It is found that the original irrigation module in CLM 4.5 tends to overestimate the soil moisture content compared to both point observations and SMAP, and the results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.
Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements
NASA Astrophysics Data System (ADS)
Crean, Jared; Hicken, Jason E.; Del Rey Fernández, David C.; Zingg, David W.; Carpenter, Mark H.
2018-03-01
We present and analyze an entropy-stable semi-discretization of the Euler equations based on high-order summation-by-parts (SBP) operators. In particular, we consider general multidimensional SBP elements, building on and generalizing previous work with tensor-product discretizations. In the absence of dissipation, we prove that the semi-discrete scheme conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness. Furthermore, interior penalties can be incorporated into the discretization to ensure that the total (mathematical) entropy decreases monotonically, producing an entropy-stable scheme. SBP discretizations with curved elements remain accurate, conservative, and entropy stable provided the mapping Jacobian satisfies the discrete metric invariants; polynomial mappings at most one degree higher than the SBP operators automatically satisfy the metric invariants in two dimensions. In three dimensions, we describe an elementwise optimization that leads to suitable Jacobians in the case of polynomial mappings. The properties of the semi-discrete scheme are verified and investigated using numerical experiments.
A novel chaotic image encryption scheme using DNA sequence operations
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Zhang, Ying-Qian; Bao, Xue-Mei
2015-10-01
In this paper, we propose a novel image encryption scheme based on DNA (Deoxyribonucleic acid) sequence operations and chaotic system. Firstly, we perform bitwise exclusive OR operation on the pixels of the plain image using the pseudorandom sequences produced by the spatiotemporal chaos system, i.e., CML (coupled map lattice). Secondly, a DNA matrix is obtained by encoding the confused image using a kind of DNA encoding rule. Then we generate the new initial conditions of the CML according to this DNA matrix and the previous initial conditions, which can make the encryption result closely depend on every pixel of the plain image. Thirdly, the rows and columns of the DNA matrix are permuted. Then, the permuted DNA matrix is confused once again. At last, after decoding the confused DNA matrix using a kind of DNA decoding rule, we obtain the ciphered image. Experimental results and theoretical analysis show that the scheme is able to resist various attacks, so it has extraordinarily high security.
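The diffusion and encoding steps described above can be sketched classically. The following is an illustrative simplification, not the authors' scheme: a single logistic map stands in for the coupled map lattice (CML), the key parameters `x0` and `r` are hypothetical, and only one of the eight DNA encoding rules is shown; the plaintext-dependent re-seeding of the chaotic system is omitted.

```python
DNA = {0b00: 'A', 0b01: 'C', 0b10: 'G', 0b11: 'T'}   # one of 8 encoding rules
BASE = {v: k for k, v in DNA.items()}

def logistic_keystream(x0, r, n):
    # Keystream sketch: iterate the logistic map x -> r*x*(1-x) and quantize
    # each state to a byte (stands in for the paper's CML).
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(pixels, x0=0.3141, r=3.99):
    # XOR-confuse pixels with the chaotic keystream, then DNA-encode each
    # byte as four bases (two bits per base).
    ks = logistic_keystream(x0, r, len(pixels))
    xored = [p ^ k for p, k in zip(pixels, ks)]
    return [''.join(DNA[(b >> s) & 0b11] for s in (6, 4, 2, 0)) for b in xored]

def decrypt(strands, x0=0.3141, r=3.99):
    # Inverse: decode bases back to bytes, then XOR with the same keystream.
    ks = logistic_keystream(x0, r, len(strands))
    vals = [sum(BASE[c] << s for c, s in zip(st, (6, 4, 2, 0))) for st in strands]
    return [b ^ k for b, k in zip(vals, ks)]
```

The full scheme additionally permutes the rows and columns of the DNA matrix and re-derives the chaotic initial conditions from the DNA matrix itself, which is what makes the ciphertext depend on every pixel of the plain image.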
NASA Technical Reports Server (NTRS)
Nowottnick, E.
2007-01-01
During August 2006, the NASA African Multidisciplinary Analyses Mission (NAMMA) field experiment was conducted to characterize the structure of African Easterly Waves and their evolution into tropical storms. Mineral dust aerosols affect tropical storm development, although their exact role remains to be understood. To better understand the role of dust in tropical cyclogenesis, we have implemented a dust source, transport, and optical model in the NASA Goddard Earth Observing System (GEOS) atmospheric general circulation model and data assimilation system. Our dust source scheme is more physically based than previous incarnations of the model, and we introduce improved dust optical and microphysical processes through inclusion of a detailed microphysical scheme. Here we use A-Train observations from MODIS, OMI, and CALIPSO with NAMMA DC-8 flight data to evaluate the simulated dust distributions and microphysical properties. Our goal is to synthesize the multi-spectral observations from the A-Train sensors to arrive at a consistent set of optical properties for the dust aerosols suitable for direct forcing calculations.
Symmetry-breaking inelastic wave-mixing atomic magnetometry.
Zhou, Feng; Zhu, Chengjie J; Hagley, Edward W; Deng, Lu
2017-12-01
The nonlinear magneto-optical rotation (NMOR) effect has prolific applications ranging from precision mapping of Earth's magnetic field to biomagnetic sensing. Studies on collisional spin relaxation effects have led to ultrahigh magnetic field sensitivities using a single-beam Λ scheme with state-of-the-art magnetic shielding/compensation techniques. However, the NMOR effect in this widely used single-beam Λ scheme is peculiarly small, requiring complex radio-frequency phase-locking protocols. We show the presence of a previously unknown energy symmetry-based nonlinear propagation blockade and demonstrate an optical inelastic wave-mixing NMOR technique that breaks this NMOR blockade, resulting in an NMOR optical signal-to-noise ratio (SNR) enhancement of more than two orders of magnitude never before seen with the single-beam Λ scheme. The large SNR enhancement was achieved simultaneously with a nearly two orders of magnitude reduction in laser power while preserving the magnetic resonance linewidth. This new method may open a myriad of applications ranging from biomagnetic imaging to precision measurement of the magnetic properties of subatomic particles.
Eulerian-Lagrangian Simulations of Transonic Flutter Instabilities
NASA Technical Reports Server (NTRS)
Bendiksen, Oddvar O.
1994-01-01
This paper presents an overview of recent applications of Eulerian-Lagrangian computational schemes in simulating transonic flutter instabilities. In this approach, the fluid-structure system is treated as a single continuum dynamics problem by switching from an Eulerian to a Lagrangian formulation at the fluid-structure boundary. This computational approach effectively eliminates the phase integration errors associated with previous methods, in which the fluid and structure are integrated sequentially using different schemes. The formulation is based on Hamilton's Principle in mixed coordinates, and both finite volume and finite element discretization schemes are considered. Results from numerical simulations of transonic flutter instabilities are presented for isolated wings, thin panels, and turbomachinery blades. The results suggest that the method is capable of reproducing the energy exchange between the fluid and the structure with significantly less error than existing methods. Localized flutter modes and panel flutter modes involving traveling waves can also be simulated effectively with no a priori knowledge of the type of instability involved.
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes with routinely employed density functionals or wave-function-based methods. The heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave-function-based methods.
Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
2013-01-01
Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes hierarchical classification to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
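The multi-atlas synthesis step lends itself to a short sketch. The snippet below is a hypothetical numpy illustration of the single-atlas, arithmetic-mean, and bulk water-assignment schemes on toy data; the array shapes, Hounsfield values, and the `real_ct` used for the similarity check are invented for illustration, and the deformable registration and PRGP steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# six hypothetical atlas CTs already deformed into the patient's MR space
# (Hounsfield units; shapes and values are purely illustrative)
deformed_atlas_cts = [rng.normal(40.0, 10.0, size=(4, 4)) for _ in range(6)]

# scheme 1: single, arbitrarily selected atlas
pseudo_ct_single = deformed_atlas_cts[0]

# scheme 2: arithmetic mean process (voxel-wise average over atlases)
pseudo_ct_mean = np.mean(deformed_atlas_cts, axis=0)

# baseline: assign the Hounsfield unit of water (0 HU) to the whole volume
pseudo_ct_water = np.zeros((4, 4))

# image-similarity check of the kind reported in the paper, against an
# invented "real" patient CT
real_ct = deformed_atlas_cts[0] + rng.normal(0.0, 5.0, size=(4, 4))
r = np.corrcoef(real_ct.ravel(), pseudo_ct_mean.ravel())[0, 1]
```

The voxel-wise average is trivially cheap, which is one reason the paper suggests it as a reasonable alternative to the costlier Gaussian-process synthesis when both give similar dosimetric accuracy.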
NASA Astrophysics Data System (ADS)
Li, Haifeng; Cui, Guixiang; Zhang, Zhaoshun
2018-04-01
A coupling scheme is proposed for the simulation of microscale flow and dispersion in which both the mesoscale field and small-scale turbulence are specified at the boundary of a microscale model. The small-scale turbulence is obtained individually in the inner and outer layers by transformation of pre-computed databases, and the two contributions are then combined in a weighted sum. Validation of the results for a flow over a cluster of model buildings shows that the inner- and outer-layer transition height should be located in the roughness sublayer. Both the new scheme and the previous scheme are applied to the simulation of flow and dispersion over the central business district of Oklahoma City (a point-source release during intensive observation period 3 of the Joint Urban 2003 experimental campaign), with results showing that the wind speed is well predicted in the canopy layer. Compared with the previous scheme, the new scheme improves the prediction of the wind direction and turbulent kinetic energy (TKE) in the canopy layer. The flow field influences the scalar plume in two ways: the averaged flow field determines the advective flux, and the TKE field determines the turbulent flux. Thus, the mean, root-mean-square, and maximum of the concentration agree better with the observations under the new scheme. These results indicate that the new scheme is an effective means of simulating the complex flow and dispersion in urban canopies.
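The weighted-sum combination of inner- and outer-layer turbulence can be sketched as follows. The tanh ramp used as the weighting function and the transition-height parameters are assumptions of this illustration, not the paper's formulation.

```python
import numpy as np

def blend_turbulence(u_inner, u_outer, z, z_t, dz=10.0):
    """Weighted sum of inner- and outer-layer turbulent fluctuations.
    The weight ramps smoothly from 0 below the transition height z_t
    to 1 above it (tanh ramp of width dz, an assumption of this sketch)."""
    w = 0.5 * (1.0 + np.tanh((z - z_t) / dz))
    return (1.0 - w) * u_inner + w * u_outer

z = np.linspace(0.0, 200.0, 5)      # heights above ground (m)
u_inner = np.full_like(z, 0.8)      # pre-computed inner-layer database value
u_outer = np.full_like(z, 0.2)      # pre-computed outer-layer database value

# transition height placed low, i.e. inside the roughness sublayer
u_boundary = blend_turbulence(u_inner, u_outer, z, z_t=50.0)
```

Near the ground the boundary condition is dominated by the inner-layer database, and well above the transition height by the outer-layer one, matching the paper's finding that the transition should sit in the roughness sublayer.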
Referrals to the Glasgow sheriff court liaison scheme since the introduction of referral criteria.
Orr, Eilidh M; Baker, Melanie; Ramsay, Louise
2007-10-01
This study is an audit of a court liaison scheme operating in Glasgow sheriff court. It represents a follow-on from previous work after the introduction of referral criteria to delineate more closely the appropriate population to be seen. Results were compared with the previous audit. The total number of referrals decreased by 66%; however, the proportion with a psychotic illness increased to 33%. A high referral rate of prisoners with addictions continued, although the service was not primarily designed for them. Fewer patients with no psychiatric diagnosis were referred to the scheme. Outcomes were, however, similar, with approximately the same admission rate to hospital. The introduction of criteria appears to have reduced the number of inappropriate referrals without excluding the population with serious mental disorder. The introduction of referral criteria therefore seems to have been beneficial to the scheme. The scheme has since changed again, so there may be benefit in a further audit to monitor the continuing appropriateness of referrals. The provision of specific interventions targeting prisoners with addictions is also supported by this audit.
NASA Astrophysics Data System (ADS)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-01
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
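A minimal scalar caricature of the idea: the self-consistency problem below is a toy induced-dipole equation p = α(E + kp) solved by fixed-point iteration, and the initial guess is propagated as an auxiliary degree of freedom with a time-reversible Verlet-like step (with ω²Δt² set to 1). All parameter values are invented for illustration; the sketch only shows why a propagated guess reduces the iteration count relative to restarting from zero at each step.

```python
import numpy as np

alpha, k = 1.0, 0.9   # toy polarizability and coupling;
                      # SCF fixed point is p* = alpha*e/(1 - alpha*k)

def scf_dipole(e_ext, guess, tol=1e-10):
    """Solve p = alpha*(e_ext + k*p) by fixed-point iteration and count
    the iterations needed from the given initial guess."""
    p, n_iter = guess, 0
    while True:
        p_new = alpha * (e_ext + k * p)
        n_iter += 1
        if abs(p_new - p) < tol:
            return p_new, n_iter
        p = p_new

dt, steps = 0.05, 200
fields = np.sin(np.arange(steps) * dt)     # slowly varying external field

# baseline: restart every SCF from a zero guess
iters_cold = [scf_dipole(e, guess=0.0)[1] for e in fields]

# extended-Lagrangian-style warm start: the guess is an auxiliary degree
# of freedom propagated with a time-reversible Verlet-like step
p_aux_prev = p_aux = 0.0
iters_warm = []
for e in fields:
    p_scf, n_iter = scf_dipole(e, guess=p_aux)
    iters_warm.append(n_iter)
    p_aux_prev, p_aux = p_aux, 2.0 * p_aux - p_aux_prev + (p_scf - p_aux)
```

Because the auxiliary variable is integrated rather than simply overwritten with the last converged solution, the propagation remains time reversible, which is the property that protects energy conservation in long MD runs while still cutting the average number of SCF iterations.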
Seino, Junji; Nakai, Hiromi
2012-10-14
The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X(2) and hydrogen halide molecules, (HX)(n) (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear-scaling with respect to the system size and a small prefactor.
What can we learn from parents about enhancing participation in pharmacovigilance?
Arnott, Janine; Hesselgreaves, Hannah; Nunn, Anthony J; Peak, Matthew; Pirmohamed, Munir; Smyth, Rosalind L; Turner, Mark A; Young, Bridget
2013-01-01
Aims: To investigate parents' views and experiences of directly reporting a suspected ADR in their child. Methods: We audio-recorded semi-structured qualitative interviews with parents of children with suspected ADRs. Our sample included parents with (n = 17) and without (n = 27) previous experience of submitting a Yellow Card. Results: Parents in both groups described poor awareness of the Yellow Card Scheme. Parents who had participated in the Yellow Card Scheme were generally happy to report their child's ADR via the Scheme and valued the opportunity to report concerns independently of health practitioners. They expressed motivations for reporting that have not previously been described, linked to the parental role, including how registering a concern about a medicine helped to resolve uncomfortable feelings about their child's ADR. Parents who had not previously submitted a Yellow Card expressed uncertainty about the legitimacy of their involvement in reporting and doubts about the value of the information that they could provide. Conclusion: Promoting wider participation in pharmacovigilance schemes will depend on raising public awareness. Additionally, our findings point to the need to empower lay people to submit reports and to reassure them about the value of their reports. PMID:22905902
Subglottal Impedance-Based Inverse Filtering of Voiced Sounds Using Neck Surface Acceleration
Zañartu, Matías; Ho, Julio C.; Mehta, Daryush D.; Hillman, Robert E.; Wodicka, George R.
2014-01-01
A model-based inverse filtering scheme is proposed for an accurate, non-invasive estimation of the aerodynamic source of voiced sounds at the glottis. The approach, referred to as subglottal impedance-based inverse filtering (IBIF), takes as input the signal from a lightweight accelerometer placed on the skin over the extrathoracic trachea and yields estimates of glottal airflow and its time derivative, offering important advantages over traditional methods that deal with the supraglottal vocal tract. The proposed scheme is based on mechano-acoustic impedance representations from a physiologically based transmission line model and a lumped skin surface representation. A subject-specific calibration protocol is used to account for individual adjustments of subglottal impedance parameters and mechanical properties of the skin. Preliminary results for sustained vowels with various voice qualities show that the subglottal IBIF scheme yields estimates comparable to those of current aerodynamics-based methods of clinical vocal assessment. A mean absolute error of less than 10% was observed for two glottal airflow measures (maximum flow declination rate and amplitude of the modulation component) that have been associated with the pathophysiology of some common voice disorders caused by faulty and/or abusive patterns of vocal behavior (i.e., vocal hyperfunction). The proposed method further advances the ambulatory assessment of vocal function based on the neck acceleration signal, which has previously been limited to the estimation of phonation duration, loudness, and pitch. Subglottal IBIF is also suitable for other ambulatory applications in speech communication, for which further evaluation is underway. PMID:25400531
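Inverse filtering of this kind can be illustrated with a toy one-dimensional example. The forward transfer function below is an arbitrary short FIR filter standing in for the subglottal transmission-line model, and the Tikhonov-regularized spectral division is one common way to invert it; neither is taken from the paper.

```python
import numpy as np

fs = 8000                                  # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)             # 20 ms of signal

# invented forward transfer: glottal flow -> neck-surface acceleration,
# modeled as a short FIR impulse response
h = np.array([1.0, -0.6, 0.2, -0.05])

glottal_flow = np.maximum(np.sin(2 * np.pi * 200 * t), 0.0)  # toy source
accel = np.convolve(glottal_flow, h)[: len(t)]               # "measured" signal

# inverse filtering: regularized spectral division (Tikhonov)
n = len(accel)
H = np.fft.rfft(h, n=n)
eps = 1e-3
flow_est = np.fft.irfft(np.fft.rfft(accel) * np.conj(H) /
                        (np.abs(H) ** 2 + eps), n=n)
```

The regularization term `eps` keeps the division stable where the forward transfer has little energy, at the cost of slight attenuation; the paper's subject-specific calibration plays the analogous role of fixing the impedance parameters before inversion.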
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding in our iterative reconstruction scheme, the improved GPU implementation provides speed-ups of more than a factor of 200 over the previous accelerated GPU code. PMID:23682203
Improved integral images compression based on multi-view extraction
NASA Astrophysics Data System (ADS)
Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric
2016-09-01
Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
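The self-consistency iteration can be sketched for a single neuron class (a minimal illustration; all parameter values below, including the 5 ms refractory period, are assumptions, and the paper's averaging and heterogeneous-class machinery is omitted):

```python
import numpy as np

# Sketch: drive an LIF neuron with Gaussian noise of spectrum S_in, measure
# its spike-train spectrum S_out, rebuild S_in from S_out, and repeat.
rng = np.random.default_rng(0)
dt, T, trials = 1e-3, 5.0, 10
n = int(T / dt)
freqs = np.fft.rfftfreq(n, dt)
tau, v_th, v_reset, mu = 0.02, 1.0, 0.0, 1.1   # LIF membrane parameters
K, J = 100, 0.005                              # in-degree, synaptic weight

def colored_noise(S):
    """One realization of zero-mean Gaussian noise with spectrum ~ S."""
    phases = np.exp(2j * np.pi * rng.random(len(S)))
    amp = np.sqrt(np.maximum(S, 0.0) * n / dt)
    amp[0] = 0.0                               # no DC component
    return np.fft.irfft(amp * phases, n)

def output_spectrum(S_in):
    """Trial-averaged spike-train power spectrum and firing rate of an LIF
    neuron driven by mu + noise with spectrum S_in (Euler integration)."""
    spec, n_spikes = np.zeros(len(freqs)), 0
    for _ in range(trials):
        xi = colored_noise(S_in)
        v, train, refrac = 0.0, np.zeros(n), 0
        for i in range(n):
            if refrac > 0:
                refrac -= 1
                continue
            v += dt / tau * (mu - v + xi[i])
            if v >= v_th:
                v, refrac = v_reset, 5         # 5 ms absolute refractory period
                train[i] = 1.0 / dt            # delta-spike approximation
                n_spikes += 1
        ft = np.fft.rfft((train - train.mean()) * dt)
        spec += np.abs(ft) ** 2 / T
    return spec / trials, n_spikes / (trials * T)

# self-consistency loop: the network noise felt by one neuron is roughly
# (number of inputs) x (weight)^2 x (output spectrum); damped for stability
S_in = np.full(len(freqs), 0.005)              # flat initial guess
for _ in range(4):
    S_out, rate = output_spectrum(S_in)
    S_in = 0.5 * S_in + 0.5 * K * J**2 * S_out
```

After a few iterations `S_in` approximates a spectrum consistent with the neuron's own output, which is the fixed point the paper's scheme searches for.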
General Conversion for Obtaining Strongly Existentially Unforgeable Signatures
NASA Astrophysics Data System (ADS)
Teranishi, Isamu; Oyama, Takuro; Ogata, Wakaha
We say that a signature scheme is strongly existentially unforgeable (SEU) if no adversary, given message/signature pairs adaptively, can generate a signature on a new message or a new signature on a previously signed message. We propose a general and efficient conversion in the standard model that transforms a secure signature scheme into an SEU signature scheme. In order to construct that conversion, we use a chameleon commitment scheme. Here a chameleon commitment scheme is a variant of commitment scheme in which one who knows the secret key can change the committed value after publishing the commitment. We define the chosen-message security notion for the chameleon commitment scheme, and show that the signature scheme transformed by our proposed conversion satisfies the SEU property if the chameleon commitment scheme is chosen-message secure. By modifying the proposed conversion, we also give a general and efficient conversion in the random oracle model that transforms a secure signature scheme into an SEU signature scheme. This second conversion also uses a chameleon commitment scheme but requires only key-only-attack security for it.
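The trapdoor property at the heart of such schemes can be illustrated with a closely related primitive, the discrete-log chameleon hash: the trapdoor holder can open the same commitment value to any message. The tiny group parameters below are assumptions for illustration only; real schemes use cryptographically large groups:

```python
# Toy discrete-log chameleon hash: ch(m, r) = g^m * y^r mod p, y = g^x.
# With the trapdoor x one can find collisions; without it this is hard.
p, q, g = 2039, 1019, 4      # p = 2q + 1, both prime; g generates the order-q subgroup
x = 123                      # trapdoor (secret key) -- illustrative value
y = pow(g, x, p)             # public key

def ch(m, r):
    """Chameleon hash of message m with randomness r."""
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def collide(m, r, m2):
    """With trapdoor x, find r2 such that ch(m2, r2) == ch(m, r):
    m + x*r = m2 + x*r2 (mod q)  =>  r2 = r + (m - m2) * x^-1 (mod q)."""
    return (r + (m - m2) * pow(x, -1, q)) % q

m, r, m2 = 777, 42, 555
r2 = collide(m, r, m2)
assert ch(m, r) == ch(m2, r2)    # same commitment, different message
```

The conversion in the abstract exploits exactly this ability to re-open a commitment after the fact, so the reduction can answer signing queries without the real signing key.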
Controlled quantum perfect teleportation of multiple arbitrary multi-qubit states
NASA Astrophysics Data System (ADS)
Shi, Runhua; Huang, Liusheng; Yang, Wei; Zhong, Hong
2011-12-01
We present an efficient controlled quantum perfect teleportation scheme. In our scheme, multiple senders can teleport multiple arbitrary unknown multi-qubit states to a single receiver via a previously shared entangled state with the help of one or more controllers. Furthermore, our scheme performs very well in terms of measurement and operation complexity, since it only needs to perform Bell-state and single-particle measurements and to apply Controlled-NOT gates and other single-particle unitary operations. In addition, compared with traditional schemes, our scheme needs fewer qubits as quantum resources and exchanges less classical information, and thus achieves higher communication efficiency.
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
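For the quadratic integrate-and-fire model the free dynamics dV/dt = V^2 + I (with I > 0) solve in closed form, V(t) = sqrt(I) tan(sqrt(I)(t - t0)), which is what makes exact event-driven simulation possible: the next spike time is computed analytically instead of by time stepping. A sketch with instantaneous synapses (reset value and input pulses are assumptions):

```python
import math

I, V_reset = 1.0, 0.0   # constant drive and reset potential (assumed)

def time_to_spike(V):
    """Exact time for V to escape to +infinity (the spike) under dV/dt = V^2 + I."""
    s = math.sqrt(I)
    return (math.pi / 2 - math.atan(V / s)) / s

def advance(V, dt):
    """Exact free evolution of V over dt (valid while no spike occurs)."""
    s = math.sqrt(I)
    return s * math.tan(math.atan(V / s) + s * dt)

def simulate(inputs, t_end):
    """inputs: time-sorted (time, weight) pulses causing instantaneous jumps in V.
    Returns the exact spike times up to t_end."""
    t, V, spikes, k = 0.0, V_reset, [], 0
    while t < t_end:
        t_spk = t + time_to_spike(V)
        t_in = inputs[k][0] if k < len(inputs) else float("inf")
        if t_spk <= min(t_in, t_end):      # next event: a spike
            spikes.append(t_spk)
            t, V = t_spk, V_reset
        elif t_in <= t_end:                # next event: an input pulse
            V = advance(V, t_in - t) + inputs[k][1]
            t, k = t_in, k + 1
        else:
            break
    return spikes

spikes_free = simulate([], 2.0)            # with V0 = 0, first spike at pi/2
```

An excitatory pulse, e.g. `simulate([(0.5, 0.5)], 2.0)`, advances the first spike to an earlier, still exactly computed, time.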
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.; Basarab, B.;
2016-01-01
The focus of this analysis is on lightning-generated nitrogen oxides (LNOx) and their distribution for two thunderstorms observed during the Deep Convective Clouds and Chemistry (DC3) field campaign in May-June 2012. The Weather Research and Forecasting Chemistry (WRF-Chem) model is used to perform cloud-resolved simulations for the May 29-30 Oklahoma severe convection, which contained one supercell, and the June 6-7 Colorado squall line. Aircraft and ground-based observations (e.g., trace gases, lightning and radar) collected during DC3 are used in comparisons against the model-simulated lightning flashes generated by the flash rate parameterization schemes (FRPSs) incorporated into the model, as well as the model-simulated LNOx predicted in the anvil outflow. Newly generated FRPSs based on DC3 radar observations and Lightning Mapping Array data are implemented in the model, along with previously developed schemes from the literature. The results of these analyses will also be compared between storms to investigate which FRPSs were most appropriate for the two types of convection and to examine the variation in the LNOx production. The simulated LNOx results from WRF-Chem will also be compared against other previously studied mid-latitude thunderstorms.
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
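The max-sum (log-domain) inference that the circuit approximates can be illustrated on a small chain graph; this is the abstract mathematical operation, not the spiking implementation:

```python
import numpy as np

def max_sum_chain(unary, pairwise):
    """MAP inference on a chain MRF by forward max-sum + backtracking.
    unary: (n, k) log-potentials per node; pairwise: (k, k) shared
    log-potential per edge. Returns the maximizing state sequence."""
    n, k = unary.shape
    msg = np.zeros((n, k))              # running max-sum messages
    back = np.zeros((n, k), dtype=int)  # argmax pointers for backtracking
    for i in range(1, n):
        scores = msg[i - 1][:, None] + unary[i - 1][:, None] + pairwise
        msg[i] = scores.max(axis=0)     # max replaces the sum of sum-product
        back[i] = scores.argmax(axis=0)
    states = [int((msg[-1] + unary[-1]).argmax())]
    for i in range(n - 1, 0, -1):
        states.append(int(back[i][states[-1]]))
    return states[::-1]

# toy example: smoothness prior plus one strong unary cue propagates
unary = np.array([[0.0, 3.0], [0.0, 0.0], [0.0, 0.0]])
pairwise = np.array([[1.0, 0.0], [0.0, 1.0]])   # rewards matching neighbors
```

Here `max_sum_chain(unary, pairwise)` returns `[1, 1, 1]`: the strong evidence at the first node propagates along the chain through the maximization, the operation the letter's populations implement.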
Symmetric autocompensating quantum key distribution
NASA Astrophysics Data System (ADS)
Walton, Zachary D.; Sergienko, Alexander V.; Levitin, Lev B.; Saleh, Bahaa E. A.; Teich, Malvin C.
2004-08-01
We present quantum key distribution schemes which are autocompensating (require no alignment) and symmetric (Alice and Bob receive photons from a central source) for both polarization and time-bin qubits. The primary benefit of the symmetric configuration is that both Alice and Bob may have passive setups (neither Alice nor Bob is required to make active changes for each run of the protocol). We show that both the polarization and the time-bin schemes may be implemented with existing technology. The new schemes are related to previously described schemes by the concept of advanced waves.
A Theoretical Analysis of a New Polarimetric Optical Scheme for Glucose Sensing in the Human Eye
NASA Technical Reports Server (NTRS)
Rovati, Luigi L.; Boeckle, Stefan; Ansari, Rafat R.; Salzman, Jack A. (Technical Monitor)
2002-01-01
The challenging task of in vivo polarimetric glucose sensing is the identification and selection of a scheme to optically access the aqueous humor of the human eye. In this short communication, an earlier approach of Cote et al. is theoretically compared with our new optical scheme. Simulations of the new scheme using the eye model of Navarro suggest that the new optical geometry can overcome the limitations of the previous approach for in vivo measurements of glucose in a human eye.
You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa
2016-10-01
The compositions of food wastes and their co-gasification producer gas were compared with existing data for sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification in terms of residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. A Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme in terms of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia
2018-05-17
Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional "encrypt-then-sign" or "sign-then-encrypt" strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and has been gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of end users in signcryption and designcryption is linear in the complexity of the signing and encryption access policy. Moreover, previously proposed ABSC schemes have only a single authority responsible for attribute management and key generation, whereas in reality different attributes of a user are usually monitored by different authorities. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.
Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction
NASA Astrophysics Data System (ADS)
Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho
2016-11-01
This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms, which provided an unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of higher sampling efficiencies in non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging, which was regarded as the gold standard (difference within 1.8 mm on average).
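The core of a DR-derived surrogate can be sketched with plain PCA (one common DR choice) on synthetic frames; the paper's acquisition schemes and sorting details are not reproduced, and the image size, noise level, and breathing frequency below are assumptions:

```python
import numpy as np

# Synthetic frames of a "diaphragm" edge moving sinusoidally; flattening
# the frames and taking the first principal component recovers the
# breathing signal without any navigator.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 200)                  # 200 frames over 20 s
phase = np.sin(2 * np.pi * 0.25 * t)         # 0.25 Hz breathing
frames = []
for ph in phase:
    img = np.zeros((32, 32))
    edge = int(16 + 6 * ph)                  # diaphragm edge follows breathing
    img[edge:, :] = 1.0
    frames.append(img.ravel() + 0.05 * rng.standard_normal(32 * 32))
X = np.asarray(frames)
X -= X.mean(axis=0)                          # center the frames
U, s, Vt = np.linalg.svd(X, full_matrices=False)
surrogate = U[:, 0] * s[0]                   # first principal component score
corr = abs(np.corrcoef(surrogate, phase)[0, 1])
```

The correlation `corr` between the DR-derived surrogate and the true breathing phase comes out close to 1, which is the property the sorting stage relies on.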
Analytic redundancy management for SCOLE
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.
1988-01-01
The objective of this work is to develop a practical sensor analytic redundancy management scheme for flexible spacecraft and to demonstrate it using the SCOLE experimental apparatus. The particular scheme to be used is taken from previous work on the Grid apparatus by Williams and Montgomery.
Heating and flooding: A unified approach for rapid generation of free energy surfaces
NASA Astrophysics Data System (ADS)
Chen, Ming; Cuendet, Michel A.; Tuckerman, Mark E.
2012-07-01
We propose a general framework for the efficient sampling of conformational equilibria in complex systems and the generation of associated free energy hypersurfaces in terms of a set of collective variables. The method is a strategic synthesis of the adiabatic free energy dynamics approach, previously introduced by us and others, and existing schemes using Gaussian-based adaptive bias potentials to disfavor previously visited regions. In addition, we suggest sampling the thermodynamic force instead of the probability density to reconstruct the free energy hypersurface. All these elements are combined into a robust extended phase-space formalism that can be easily incorporated into existing molecular dynamics packages. The unified scheme is shown to outperform both metadynamics and adiabatic free energy dynamics in generating two-dimensional free energy surfaces for several example cases including the alanine dipeptide in the gas and aqueous phases and the met-enkephalin oligopeptide. In addition, the method can efficiently generate higher dimensional free energy landscapes, which we demonstrate by calculating a four-dimensional surface in the Ramachandran angles of the gas-phase alanine tripeptide.
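The Gaussian-based adaptive-bias ingredient can be sketched on a 1-D double well; the heights, widths, deposition stride, and temperature below are assumptions, and the full method's adiabatic dynamics and force sampling are omitted:

```python
import numpy as np

# Metadynamics-style Gaussian bias on U(x) = (x^2 - 1)^2: Gaussians are
# deposited at visited positions, disfavoring already-sampled regions and
# pushing the walker over the barrier at x = 0.
rng = np.random.default_rng(0)
beta, dt = 4.0, 1e-3
height, width, stride = 0.15, 0.2, 200
centers = []                                  # deposited Gaussian centers

def bias(x):
    """Accumulated bias potential at x."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float((height * np.exp(-(x - c) ** 2 / (2 * width ** 2))).sum())

def bias_force(x):
    """-d(bias)/dx."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float((height * (x - c) / width ** 2
                  * np.exp(-(x - c) ** 2 / (2 * width ** 2))).sum())

x, crossings, prev_sign = -1.0, 0, -1
for step in range(40000):                     # overdamped Langevin dynamics
    force = -4 * x * (x ** 2 - 1) + bias_force(x)
    x += dt * force + np.sqrt(2 * dt / beta) * rng.standard_normal()
    if step % stride == 0:
        centers.append(x)                     # deposit a new Gaussian
    s = 1 if x > 0 else -1
    if s != prev_sign:
        crossings += 1
        prev_sign = s
```

As the bias fills the starting well, barrier crossings occur far sooner than unbiased dynamics at this temperature would allow.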
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers.
ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) to the headers and improves the compression schemes, providing better tolerance of conditions with a high BER.
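The delta encoding that several of these schemes rely on, and whose loss propagation the report discusses, can be sketched as follows (the field names and dict representation are illustrative; the real RFCs encode change masks and variable-length deltas on the wire):

```python
# Toy delta encoding of TCP-like header fields: only fields that changed
# since the previous packet are transmitted, as (name, delta) pairs.
def compress(prev, cur):
    """Return the deltas of fields that changed relative to prev."""
    return {k: cur[k] - prev[k] for k in cur if cur[k] != prev[k]}

def decompress(prev, deltas):
    """Rebuild the full header from the previous header plus deltas."""
    out = dict(prev)
    for k, d in deltas.items():
        out[k] = prev[k] + d
    return out

h1 = {"seq": 1000, "ack": 500, "win": 8192, "id": 1}
h2 = {"seq": 1460, "ack": 500, "win": 8192, "id": 2}
d = compress(h1, h2)          # only seq and id changed
assert decompress(h1, d) == h2
```

Loss propagation follows directly from this design: if the packet carrying `d` is lost, the receiver's `prev` state is stale and every subsequent delta decodes to a wrong header until the context is refreshed, which is why SCPS avoids deltas and ROHC adds CRCs.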
In-TFT-Array-Process Micro Defect Inspection Using Nonlinear Principal Component Analysis
Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang
2009-01-01
Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacture, and has received much attention in the field of automatic optical inspection (AOI). Previously, most attention was paid to macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those arising in the TFT array process, the first process in TFT-LCD manufacturing, where defect inspection is considerably more difficult. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect defects in the images captured from the surface of LCD panels, but also automatically recognize the types of the detected defects. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme achieves a defect detection rate of over 99% and a defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image. PMID:20057957
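The KPCA ingredient can be sketched as feature-space novelty scoring on synthetic 2-D data (the paper works on panel images and adds an ISVM classifier, both omitted here; the data shapes and RBF width are assumptions):

```python
import numpy as np

# KPCA novelty score: fit kernel PCA on "normal" samples only, then score
# test points by their feature-space distance not explained by the top
# components. Defects (a ring far from the normal cluster) score high.
rng = np.random.default_rng(0)
normal = rng.standard_normal((60, 2)) * 0.2                     # normal cluster
ang = rng.uniform(0, 2 * np.pi, 40)
defect = np.c_[np.cos(ang), np.sin(ang)] * 2 + rng.standard_normal((40, 2)) * 0.1

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = normal
n = len(X)
K = rbf(X, X)
row_mean, all_mean = K.mean(axis=0), K.mean()
Kc = (np.eye(n) - 1.0 / n) @ K @ (np.eye(n) - 1.0 / n)          # centered kernel
lam, V = np.linalg.eigh(Kc)
q = 2
alphas = V[:, -q:] / np.sqrt(lam[-q:])                          # top-q components

def score(Z):
    """Feature-space residual: high for points unlike the training data."""
    kz = rbf(Z, X)
    kzc = kz - kz.mean(axis=1, keepdims=True) - row_mean + all_mean
    f = kzc @ alphas                       # projections onto top-q components
    self_c = 1.0 - 2 * kz.mean(axis=1) + all_mean
    return self_c - (f ** 2).sum(axis=1)

s_norm, s_def = score(normal), score(defect)
```

A simple threshold on this score separates the two populations cleanly, which is the role KPCA plays before classification in the paper's pipeline.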
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Site selection model for new metro stations based on land use
NASA Astrophysics Data System (ADS)
Zhang, Nan; Chen, Xuewu
2015-12-01
Since the construction of a metro system generally lags behind the development of urban land use, sites of metro stations should adapt to their surrounding situations, which was rarely discussed by previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to obtain a ranking of the candidate schemes. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
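The entropy-weight half of the weighting scheme can be sketched as follows (the indicator matrix is invented, and the AHP weights the model combines with these are omitted):

```python
import numpy as np

# Entropy weight method: indicators with more dispersion across candidate
# sites carry more information and therefore get larger weights.
X = np.array([          # rows: candidate sites, cols: land-use indicators
    [0.8, 0.6, 0.7],    # (higher = better; values are illustrative)
    [0.5, 0.9, 0.4],
    [0.6, 0.5, 0.9],
])
P = X / X.sum(axis=0)                        # normalize each indicator column
k = 1.0 / np.log(len(X))
E = -k * (P * np.log(P)).sum(axis=0)         # entropy of each indicator
w = (1 - E) / (1 - E).sum()                  # entropy weights, summing to 1
scores = X @ w                               # composite score per site
ranking = np.argsort(-scores)                # best site first
```

An indicator that varied identically across all sites would get entropy 1 and weight 0, which is the intuition behind using entropy to temper the subjective AHP weights.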
Using new aggregation operators in rule-based intelligent control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.
1990-01-01
A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
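The OWA operator itself is simple to state: sort the arguments in descending order, then take a weighted average with fixed positional weights. A sketch showing how the weight vector interpolates between Or (max) and And (min); the weight vectors are illustrative:

```python
# Yager's ordered weighted averaging (OWA) operator: the weights attach to
# ranks, not to particular arguments.
def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9    # weights must sum to 1
    v = sorted(values, reverse=True)         # descending order
    return sum(w * x for w, x in zip(weights, v))

truths = [0.9, 0.4, 0.7]
owa(truths, [1.0, 0.0, 0.0])   # -> 0.9: all weight on the top rank, i.e. Or/max
owa(truths, [0.0, 0.0, 1.0])   # -> 0.4: all weight on the bottom rank, i.e. And/min
owa(truths, [1/3, 1/3, 1/3])   # plain average, strictly between the two
```

Intermediate weight vectors concentrated toward the top ranks realize soft quantifiers such as Most, which is how the generalized control rules in the abstract are built.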
Wheeler, M J; Mason, R H; Steunenberg, K; Wagstaff, M; Chou, C; Bertram, A K
2015-05-14
Ice nucleation on mineral dust particles is known to be an important process in the atmosphere. To accurately implement ice nucleation on mineral dust particles in atmospheric simulations, a suitable theory or scheme is desirable to describe laboratory freezing data in atmospheric models. In the following, we investigated ice nucleation by supermicron mineral dust particles [kaolinite and Arizona Test Dust (ATD)] in the immersion mode. The median freezing temperature for ATD was measured to be approximately -30 °C compared with approximately -36 °C for kaolinite. The freezing results were then used to test four different schemes previously used to describe ice nucleation in atmospheric models. In terms of ability to fit the data (quantified by calculating the reduced chi-squared values), the following order was found for ATD (from best to worst): active site, pdf-α, deterministic, single-α. For kaolinite, the following order was found (from best to worst): active site, deterministic, pdf-α, single-α. The variation in the predicted median freezing temperature per decade change in the cooling rate for each of the schemes was also compared with experimental results from other studies. The deterministic model predicts the median freezing temperature to be independent of cooling rate, while experimental results show a weak dependence on cooling rate. The single-α, pdf-α, and active site schemes all agree with the experimental results within roughly a factor of 2. On the basis of our results and previous results where different schemes were tested, the active site scheme is recommended for describing the freezing of ATD and kaolinite particles. We also used our ice nucleation results to determine the ice nucleation active site (INAS) density for the supermicron dust particles tested. 
Using the data, we show that the INAS densities of supermicron kaolinite and ATD particles studied here are smaller than the INAS densities of submicron kaolinite and ATD particles previously reported in the literature.
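The cooling-rate dependence that separates the stochastic schemes from the deterministic one can be sketched with a toy single-α frozen-fraction calculation (the rate parameterization J(T) and all constants below are assumptions for illustration, not the paper's fitted values):

```python
import numpy as np

# Single-alpha scheme: every droplet shares one nucleation rate J(T), so
# the frozen fraction during cooling at rate r is
#   f(T) = 1 - exp(-(A / r) * integral_T^0 J(T') dT').
def frozen_fraction(T, rate, J0=1e-3, b=0.8, A=1.0):
    """T: descending temperatures in deg C starting at 0; rate in K/unit time."""
    J = J0 * np.exp(-b * T)                  # J grows steeply as T decreases
    dT = -np.diff(T, prepend=0.0)            # positive cooling increments
    cumJ = np.cumsum(J * dT)                 # running integral of J
    return 1 - np.exp(-(A / rate) * cumJ)

T = np.linspace(0, -40, 401)                 # cool from 0 to -40 C
f_slow = frozen_fraction(T, rate=1.0)
f_fast = frozen_fraction(T, rate=10.0)

def median_T(T, f):
    """Temperature at which half the droplets have frozen."""
    return T[np.argmax(f >= 0.5)]
```

Slower cooling gives the droplets more time per degree, so `median_T(T, f_slow)` is warmer than `median_T(T, f_fast)`; a deterministic scheme, by contrast, freezes at a fixed characteristic temperature independent of rate, which is the distinction the measurements in the abstract probe.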
New Authentication Scheme for Wireless Body Area Networks Using the Bilinear Pairing.
Wang, Chunzhi; Zhang, Yanmei
2015-11-01
Due to the development of information technologies and network technologies, healthcare systems have been employed in many countries. As an important part of healthcare systems, the wireless body area network (WBAN) could bring convenience to both patients and physicians because it could help physicians to monitor patients' physiological values remotely. It is essential to ensure secure communication in WBANs because patients' physiological values are very sensitive. Recently, Liu et al. proposed an efficient authentication scheme for WBANs. Unfortunately, Zhao pointed out that their scheme suffered from the stolen verifier-table attack. To improve security and efficiency, Zhao proposed an anonymous authentication scheme for WBANs. However, Zhao's scheme cannot provide real anonymity because the users' pseudo identities are constant values, so an attacker could trace the users. In this paper, we propose a new anonymous authentication scheme for WBANs. Security analysis shows that the proposed scheme overcomes the weaknesses of previous schemes. We also use BAN logic to demonstrate the security of the proposed scheme.
Report on Pairing-based Cryptography.
Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily
2015-01-01
This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.
An approximate Riemann solver for magnetohydrodynamics (that works in more than one dimension)
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.
1994-01-01
An approximate Riemann solver is developed for the governing equations of ideal magnetohydrodynamics (MHD). The Riemann solver has an eight-wave structure, where seven of the waves are those used in previous work on upwind schemes for MHD, and the eighth wave is related to the divergence of the magnetic field. The structure of the eighth wave is not immediately obvious from the governing equations as they are usually written, but arises from a modification of the equations that is presented in this paper. The addition of the eighth wave allows multidimensional MHD problems to be solved without the use of staggered grids or a projection scheme, one or the other of which was necessary in previous work on upwind schemes for MHD. A test problem made up of a shock tube with rotated initial conditions is solved to show that the two-dimensional code yields answers consistent with the one-dimensional methods developed previously.
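The modification referred to above is commonly written as a source term proportional to the divergence of B added to the ideal MHD equations. A sketch of that source term, assuming the conserved-state ordering [rho, rho*u (3 components), E, B (3 components)]:

```python
import numpy as np

# Powell-type divergence source term for the eight-wave formulation:
# continuity gets nothing, momentum gets -(div B) B, energy gets
# -(div B) (u . B), and the induction equation gets -(div B) u.
def powell_source(u, B, divB):
    """u, B: length-3 velocity and magnetic field vectors at a cell;
    divB: the discrete divergence of B there. Returns the length-8 source."""
    S = np.zeros(8)
    S[1:4] = -divB * B                 # momentum components
    S[4] = -divB * np.dot(u, B)        # energy component
    S[5:8] = -divB * u                 # induction components
    return S
```

When the discrete divergence is exactly zero the source vanishes and the usual conservative equations are recovered; otherwise the term advects divergence errors with the flow instead of letting them accumulate, which is what removes the need for staggered grids or a projection step.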
NASA Astrophysics Data System (ADS)
Bessler, Wolfgang G.; Schulz, Christof; Lee, Tonghun; Jeffries, Jay B.; Hanson, Ronald K.
2002-06-01
Three different high-pressure flame measurement strategies for NO laser-induced fluorescence (LIF) with A-X (0,0) excitation have been studied previously with computational simulations and experiments in flames up to 15 bars. Interference from O2 LIF is a significant problem for NO LIF measurements in lean flames, and pressure broadening and quenching lead to increased interference at higher pressure. We investigate the NO LIF signal strength, interference by hot molecular oxygen, and temperature dependence of the three previous schemes and of two newly chosen excitation schemes with wavelength-resolved LIF measurements in premixed methane/air flames at pressures between 1 and 60 bars and a range of fuel/air ratios. In slightly lean flames with an equivalence ratio of 0.83 at 60 bars, the contribution of O2 LIF to the NO LIF signal varies between 8% and 29% for the previous schemes. The O2 interference is best suppressed with excitation at 226.03 nm.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver makes it possible to compute the solution without reconstructing derivatives of the fluxes, although some drawbacks become evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy-balanced solutions. As a result of this work, energy-balanced flux-ADER schemes are constructed that provide the exact solution for steady cases and converge to the exact solution with arbitrary order for transient cases.
Free-Space Quantum Signatures Using Heterodyne Measurements
NASA Astrophysics Data System (ADS)
Croal, Callum; Peuntinger, Christian; Heim, Bettina; Khan, Imran; Marquardt, Christoph; Leuchs, Gerd; Wallden, Petros; Andersson, Erika; Korolkova, Natalia
2016-09-01
Digital signatures guarantee the authorship of electronic communications. Currently used "classical" signature schemes rely on unproven computational assumptions for security, while quantum signatures rely only on the laws of quantum mechanics to sign a classical message. Previous quantum signature schemes have used unambiguous quantum measurements. Such measurements, however, sometimes give no result, reducing the efficiency of the protocol. Here, we instead use heterodyne detection, which always gives a result, although there is always some uncertainty. We experimentally demonstrate feasibility in a real environment by distributing signature states through a noisy 1.6 km free-space channel. Our results show that continuous-variable heterodyne detection improves the signature rate for this type of scheme and therefore represents an interesting direction in the search for practical quantum signature schemes. For transmission values ranging from 100% to 10%, but otherwise assuming an ideal implementation with no other imperfections, the signature length is shorter by a factor of 2 to 10. As compared with previous relevant experimental realizations, the signature length in this implementation is several orders of magnitude shorter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Li, Y; Tian, Z
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study is to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method it was found that the PTV D95% was reduced by 4.60%-6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
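The alternation between MC beamlet dose calculation and fluence optimization can be sketched as follows. A toy random response matrix stands in for the GPU-based MC dose engine, and the particle count sampled per beamlet is proportional to its current fluence; all sizes, constants, and the simple gradient optimizer are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 50, 10
prescription = np.ones(n_voxels)             # unit target dose (toy example)
D_true = rng.random((n_voxels, n_beamlets))  # "ground truth" dose response

def mc_beamlet_dose(fluence, n_total=200_000):
    """Toy stand-in for an MC dose engine: the particle count sampled from
    each beamlet is proportional to its current fluence, so high-fluence
    beamlets get lower statistical noise."""
    particles = np.maximum((n_total * fluence / fluence.sum()).astype(int), 100)
    noise = rng.standard_normal((n_voxels, n_beamlets)) / np.sqrt(particles)
    return D_true * (1.0 + 0.05 * noise)

def optimize_fluence(D, n_iter=500, lr=0.05):
    """Least-squares fluence optimization with a non-negativity constraint."""
    x = np.ones(n_beamlets)
    for _ in range(n_iter):
        grad = D.T @ (D @ x - prescription)
        x = np.maximum(x - lr * grad / n_voxels, 0.0)
    return x

fluence = np.ones(n_beamlets)
for outer in range(3):          # alternate MC dose calculation and optimization
    D = mc_beamlet_dose(fluence)
    fluence = optimize_fluence(D)
residual = np.linalg.norm(D_true @ fluence - prescription)
```

Each outer iteration refines both the fluence map and the statistical quality of the beamlet doses that matter most, which is the core idea of the scheme.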
Factorizable Schemes for the Equations of Fluid Flow
NASA Technical Reports Server (NTRS)
Sidilkover, David
1999-01-01
We present an upwind high-resolution factorizable (UHF) discrete scheme for the compressible Euler equations that allows the full-potential and advection factors to be distinguished at the discrete level. The scheme approximates the equations in their general conservative form and is related to the family of genuinely multidimensional upwind schemes developed previously and demonstrated to have good shock-capturing capabilities. A unique property of this scheme is that, in addition to the aforementioned features, it is also factorizable. The latter property facilitates the construction of optimally efficient multigrid solvers, done through a relaxation procedure that exploits the factorizability.
NASA Astrophysics Data System (ADS)
Rashvand, Taghi
2016-11-01
We present a new scheme for quantum teleportation in which one can teleport an unknown state through a non-maximally entangled channel with certainty, using an auxiliary system. In this scheme, depending on the state of the auxiliary system, one can find a class of orthogonal vector sets serving as bases; by performing a von Neumann measurement in any element of this class, Alice can teleport an unknown state with unit fidelity and unit probability. A comparison of our scheme with some previous schemes is given, showing that ours has advantages the others do not.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Mesh quality oriented 3D geometric vascular modeling based on parallel transport frame.
Guo, Jixiang; Li, Shun; Chui, Yim Pan; Qin, Jing; Heng, Pheng Ann
2013-08-01
While a number of methods have been proposed to reconstruct geometrically and topologically accurate 3D vascular models from medical images, little attention has been paid to constantly maintain high mesh quality of these models during the reconstruction procedure, which is essential for many subsequent applications such as simulation-based surgical training and planning. We propose a set of methods to bridge this gap based on parallel transport frame. An improved bifurcation modeling method and two novel trifurcation modeling methods are developed based on 3D Bézier curve segments in order to ensure the continuous surface transition at furcations. In addition, a frame blending scheme is implemented to solve the twisting problem caused by frame mismatch of two successive furcations. A curvature based adaptive sampling scheme combined with a mesh quality guided frame tilting algorithm is developed to construct an evenly distributed, non-concave and self-intersection free surface mesh for vessels with distinct radius and high curvature. Extensive experiments demonstrate that our methodology can generate vascular models with better mesh quality than previous methods in terms of surface mesh quality criteria.
Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning
2014-12-10
Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed under constraints on coverage, connectivity and transmission success rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in the lifetime optimization. In particular, our investigations indicate that, for the same lifetime requirement, the number of sensors needed in a non-uniform topology is much smaller than in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
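The role of retransmission in such a lifetime model can be sketched with a simple first-order radio energy model. The energy constants and the free-space path-loss exponent are textbook-style assumptions, not the paper's exact model; the key point is that with per-attempt success probability p, the expected number of (re)transmissions per delivered packet is 1/p.

```python
E_ELEC = 50e-9     # J/bit, electronics energy (assumed typical radio value)
E_AMP  = 100e-12   # J/bit/m^2, amplifier energy, free-space path loss
PACKET = 4000      # bits per data packet
E_INIT = 2.0       # J, initial battery energy

def energy_per_packet(d, p_success):
    """Energy to deliver one packet over distance d when each attempt
    succeeds with probability p_success; expected attempts = 1/p_success."""
    e_tx = PACKET * (E_ELEC + E_AMP * d ** 2)   # transmit cost per attempt
    e_rx = PACKET * E_ELEC                      # receive cost per attempt
    return (e_tx + e_rx) / p_success

def lifetime_cycles(d, packets_per_cycle, p_success):
    """Number of data-gathering cycles until battery depletion."""
    return E_INIT / (packets_per_cycle * energy_per_packet(d, p_success))

# Ignoring retransmission (p = 1) overestimates lifetime versus p = 0.9:
ideal = lifetime_cycles(d=50.0, packets_per_cycle=5, p_success=1.0)
real  = lifetime_cycles(d=50.0, packets_per_cycle=5, p_success=0.9)
```

Because lifetime scales linearly with the per-attempt success probability in this model, neglecting retransmission inflates the predicted lifetime by a factor of 1/p.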
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competitor view-based 3D shape retrieval algorithms.
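The PCA variant of the data-driven subspace idea can be sketched as follows. Toy random data stands in for rendered depth images, and the subspace dimension and all names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for depth images rendered from views on the view sphere:
# 40 views of 16x16 "depth images", flattened to 256-dimensional vectors.
views = rng.random((40, 256))

# Train a PCA subspace on the view set (mean-centred SVD).
mean = views.mean(axis=0)
U, S, Vt = np.linalg.svd(views - mean, full_matrices=False)
k = 8                       # subspace dimension (a design choice)
basis = Vt[:k]              # top-k principal directions

def descriptor(depth_image_vec):
    """Project a (flattened) depth image onto the trained subspace."""
    return basis @ (depth_image_vec - mean)

def match_score(query_vec, target_vec):
    """Euclidean distance between subspace descriptors (smaller = closer)."""
    return np.linalg.norm(descriptor(query_vec) - descriptor(target_vec))
```

The singular values `S` also give the eigenvalue spread the paper uses for shape categorization: a rapidly decaying spectrum means the views of a database are well captured by a low-dimensional subspace.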
Lee, Tian-Fu; Liu, Chuan-Ming
2013-06-01
A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme in order to solve a weakness of the authentication scheme of Wei et al., in which off-line password guessing attacks cannot be resisted. This investigation indicates that Zhu's improved scheme has faults: the authentication procedure cannot execute correctly, and the scheme is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses in the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the growing security requirements of networks, biometric-based authentication schemes applied in the multi-server environment have become more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme built on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost, and is therefore more appropriate for practical applications in remote distributed networks.
ID-based encryption scheme with revocation
NASA Astrophysics Data System (ADS)
Othman, Hafizul Azrie; Ismail, Eddie Shahril
2017-04-01
In 2015, Meshram proposed an efficient ID-based cryptographic encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme is pairing-free and was claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.
A secure biometrics-based authentication scheme for telecare medicine information systems.
Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng
2013-10-01
The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and thus offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. In this paper, however, we point out that Tan's scheme is vulnerable to the denial-of-service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Fish, Jacob; Waisman, Haim
Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The GGB? scheme enriches the prolongation operator with new eigenvectors, while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criterion of principal angles between the subspaces spanned by the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.
Nuclear Data Sheets for A = 161
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reich, C. W.
2011-10-01
The experimental results from the various reaction and radioactive decay studies leading to nuclides in the A = 161 mass chain have been reviewed. Nuclides ranging from Sm (Z = 62) through Os (Z = 76) are included, with Os being a new entry based on a recently reported study. These data are summarized and presented, together with adopted level schemes and properties. This work supersedes the previous evaluation (2000Re14) of the data on these nuclides.
Investigation of the Emissivity and Suitability of a Carbon Thin Film for Terahertz Absorbers
2016-06-01
The main goal of this work is to optimize the emissivity of terahertz (THz) thermal sensors by deposition of a carbon thin film. Previously, these thermal sensors were designed to detect THz radiation utilizing metamaterials in a complicated optical probing scheme. Carbonization: in order to verify whether the carbon-soot-coated THz sensor produces sufficient spectral emissivity for IR-based readout, dummy test...
A computerized scheme for lung nodule detection in multiprojection chest radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Wei; Li Qiang; Boyce, Sarah J.
2012-04-15
Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively.
The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of CAD scheme for lung nodule detection.
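The integration step (6), which keeps candidates detected in at least two of the three projection images and discards the rest as false positives, can be sketched as follows. The pair-list representation of registered candidates is our simplification; matching detections across projections are assumed to share a candidate identifier after registration.

```python
from collections import Counter

def fuse_candidates(registered):
    """registered: list of (candidate_id, image_index) pairs after the
    registration step; the same candidate_id in different images means the
    detections were matched across projections. Keep candidates detected in
    at least two of the three images; discard the rest as false positives."""
    images_per_candidate = Counter(cid for cid, _ in set(registered))
    return {cid for cid, n in images_per_candidate.items() if n >= 2}

# Candidate "a" is seen in all three views, "b" in two, "c" in only one.
detections = [("a", 0), ("a", 1), ("a", 2), ("b", 0), ("b", 2), ("c", 1)]
kept = fuse_candidates(detections)
```

Requiring agreement between at least two projections is what suppresses the view-specific false positives that a single-image CAD scheme cannot distinguish from nodules.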
Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad
2014-01-01
Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential in developing 6-month-ahead groundwater-level forecasts based on the precipitation forecasts from the ECHAM 4.5 General Circulation Model forced with sea surface temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site using the previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation.
Results from the research reported in this paper show that using precipitation forecasts in climate models improves the ability to predict the interannual variability of winter and spring streamflow and groundwater levels over the basin. However, significant conditional bias exists in all the three modeling schemes, which indicates the need to consider improved modeling schemes as well as the availability of longer time-series of observed hydroclimatic information over the basin.
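Principal component regression, as used in modeling scheme 2 (regressing a hydrologic predictand on a precipitation-forecast field), can be sketched as follows. The synthetic data contain one dominant climate mode; all data, dimensions, and the retained-component count are ours, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_grid = 30, 12
t = rng.standard_normal(n_years)              # dominant climate mode
pattern = rng.standard_normal(n_grid)         # its spatial footprint
# Toy predictor field (e.g. gridded precipitation forecasts) and predictand
# (e.g. a groundwater level) driven by the same mode plus noise:
X = np.outer(t, pattern) + 0.3 * rng.standard_normal((n_years, n_grid))
y = 2.0 * t + 0.1 * rng.standard_normal(n_years)

def pcr_fit(X, y, k=3):
    """Principal component regression: regress y on the leading k
    principal components of the centred predictor field."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                     # PC scores
    beta = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
    return X.mean(axis=0), Vt[:k], beta, y.mean()

def pcr_predict(model, Xnew):
    xmean, comps, beta, ymean = model
    return (Xnew - xmean) @ comps.T @ beta + ymean

model = pcr_fit(X, y)
yhat = pcr_predict(model, X)
corr = np.corrcoef(y, yhat)[0, 1]   # correlation skill, as in the study
```

In the study this fit would be evaluated with leave-five-out cross-validation rather than in-sample, but the low-dimensional structure of the regression is the same.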
Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L
2010-08-01
To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding the experiences and outcomes of implemented schemes are necessary.
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm, which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
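The part-to-part learning idea can be sketched with a toy linear "process" and a Gauss-Newton least-squares update on the flange-edge error. The geometry model and its parameterization are our stand-ins, not the paper's FE-based forming process.

```python
import numpy as np

def flange_geometry(params):
    """Toy stand-in for the forming process: maps two process parameters
    (e.g. blank-holder force, draw depth) to a flange edge sampled at 8
    points. The slowly drifting disturbance of a real press is omitted."""
    a, b = params
    s = np.linspace(0.0, 1.0, 8)
    return a * s + b * s ** 2

reference = flange_geometry(np.array([1.0, -0.4]))   # desired flange edge

def ilc_step(params, measured):
    """One iterative-learning update: a Gauss-Newton least-squares step on
    the flange-edge error, using a finite-difference Jacobian."""
    err = measured - reference
    J = np.empty((len(err), len(params)))
    for j in range(len(params)):
        dp = np.zeros_like(params)
        dp[j] = 1e-6
        J[:, j] = (flange_geometry(params + dp) - measured) / 1e-6
    return params - np.linalg.lstsq(J, err, rcond=None)[0]

params = np.array([0.5, 0.0])        # initial (mistuned) process settings
for part in range(5):                # each produced part refines the settings
    params = ilc_step(params, flange_geometry(params))
final_err = np.linalg.norm(flange_geometry(params) - reference)
```

Because the toy process is linear in its parameters, the error collapses after the first update; on a real press the same loop would reduce the error gradually, part by part.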
Practical scheme for error control using feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene
2004-05-01
We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al., Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
A New Scheme for Probabilistic Teleportation and Its Potential Applications
NASA Astrophysics Data System (ADS)
Wei, Jia-Hua; Dai, Hong-Yi; Zhang, Ming
2013-12-01
We propose a novel scheme to probabilistically teleport an unknown two-level quantum state when the information about the partially entangled state is available only to the sender. This is in contrast with typical previous teleportation schemes, in which the receiver must know the non-maximally entangled state. Additionally, we illustrate two potential applications of the novel scheme for probabilistic teleportation from a sender to a receiver with the help of an assistant, who plays distinct roles under different communication conditions, and our results show that the novel proposal could broaden the range of application of probabilistic teleportation.
Plume trajectory formation under stack tip self-enveloping
NASA Astrophysics Data System (ADS)
Gribkov, A. M.; Zroichikov, N. A.; Prokhorov, V. B.
2017-10-01
The phenomenon of stack tip self-enveloping and its influence upon the conditions of plume formation and on the trajectory of its motion are considered. The processes occurring in the initial part of the plume are described, in which the interaction between the vertically directed flue gases flowing out of the stack and a horizontally moving air flow at high wind velocities leads to the formation of a flag-like plume. Conditions responsible for the origin and evolution of the interaction between these flows are demonstrated. For the first time, a plume formed under these conditions without bifurcation is registered, and a photo image thereof is presented. A scheme for the calculation of the plume trajectory is proposed, the quantitative characteristics of which are obtained from field observations. The wind velocity and direction, air temperature, and atmospheric turbulence at the level of the initial part of the trajectory were obtained from an automatic meteorological system (mounted on the outer parts of the 250 m high stack no. 1 at the Naberezhnye Chelny TEPP plant) as well as from photographing and theodolite sighting of the smoke puffs' trajectory, taking into account their velocity within its initial part. The calculation scheme is supplemented with a new acting force: the force of self-enveloping. A comparison of the new calculation scheme with the previous one reveals a significant contribution of this force to the development of the trajectory. A comparison of the full-scale field data with the results of the calculation according to the proposed new scheme is made. The proposed calculation scheme has allowed us to extend the application of the existing technique to the range of high wind velocities.
This approach makes it possible to simulate and investigate the trajectory and the full rise height of the plume above the stack mouth, depending on various mode and meteorological parameters and on the interrelation between the dynamic and thermal components of the rise, as well as to obtain a universal calculation expression for determining the plume rise height for different classes of atmospheric stability.
Mishra, Dheerendra
2015-03-01
Smart card based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computational overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public key cryptography based schemes and are important because they present an efficient solution for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to denial of service attacks. To understand the security failures of these cryptographic schemes, which is the key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
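The Kalman-filter backbone of such a tracker can be sketched for a single particle with a constant-velocity motion model. The noise covariances are our assumptions, and the multi-scale detection and two-step multi-frame association stages of the paper are omitted; only the per-frame predict/update cycle is shown.

```python
import numpy as np

# Constant-velocity Kalman filter for one particle: state = [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)            # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)            # we observe position only
Q = 0.01 * np.eye(4)                           # process noise (assumed)
R = 0.25 * np.eye(2)                           # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given the associated detection z."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 0.5])  # initial state matches the true motion
P = np.eye(4)
for t in range(1, 6):               # particle moving with constant velocity
    z = np.array([1.0 * t, 0.5 * t])
    x, P = kalman_step(x, P, z)
```

In the full tracker, `z` is chosen by the association algorithm; the filter then smooths the trajectory and predicts where to look in the next frame.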
Improved field free line magnetic particle imaging using saddle coils.
Erbe, Marlitt; Sattel, Timo F; Buzug, Thorsten M
2013-12-01
Magnetic particle imaging (MPI) is a novel tracer-based imaging method detecting the distribution of superparamagnetic iron oxide (SPIO) nanoparticles in vivo in three dimensions and in real time. Conventionally, MPI uses the signal emitted by SPIO tracer material located at a field free point (FFP). To increase the sensitivity of MPI, however, an alternative encoding scheme collecting the particle signal along a field free line (FFL) was proposed. To provide the magnetic fields needed for line imaging in MPI, a scanner setup that is very efficient with respect to electrical power consumption is needed. At the same time, the scanner needs to provide high magnetic field homogeneity along the FFL as well as parallel to its alignment, to prevent artifacts when the efficient Radon-based reconstruction methods that arise for a line encoding scheme are used. This work presents a dynamic FFL scanner setup for MPI that outperforms all previously presented setups in electrical power consumption as well as magnetic field quality.
Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.
López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth
2010-08-01
In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
A Moving Mesh Finite Element Algorithm for Singular Problems in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Li, Ruo; Tang, Tao; Zhang, Pingwen
2002-04-01
A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In a recent work (2001, J. Comput. Phys. 170, 562-588), we extended Dvinsky's method to provide an efficient moving mesh algorithm which compared favorably with the previously proposed schemes in terms of simplicity and reliability. In this work, we will further extend the moving mesh methods based on harmonic maps to deal with mesh adaptation in three space dimensions. In obtaining the variational mesh, we will solve an optimization problem with some appropriate constraints, which is in contrast to the traditional method of solving the Euler-Lagrange equation directly. The key idea of this approach is to update the interior and boundary grids simultaneously, rather than considering them separately. Application of the proposed moving mesh scheme is illustrated with some two- and three-dimensional problems with large solution gradients. The numerical experiments show that our methods can accurately resolve the detailed features of singular problems in 3D.
Stuebner, Michael; Haider, Mansoor A
2010-06-18
A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
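The O(N) update rule enabled by the exponential series can be illustrated on a single relaxation mode: because e^{-(t+dt)/tau} = e^{-dt/tau} e^{-t/tau}, the convolution integral at the new time step equals the decayed previous value plus the current increment, so only the last step's history is retained. A minimal sketch of this separability trick, not the paper's BPVE discretization:

```python
import math

# Recursive evaluation of I(t_n) = integral_0^{t_n} exp(-(t_n - s)/tau) f(s) ds.
# Separability of the exponential gives I_{n+1} = exp(-dt/tau) * I_n + increment,
# so the full history never needs to be stored: O(N) total cost for N steps.

def recursive_convolution(f, tau, dt, n_steps):
    decay = math.exp(-dt / tau)
    I = 0.0
    history = []
    for n in range(n_steps):
        # increment over [t_n, t_{n+1}] with f taken constant on the step
        t_mid = (n + 0.5) * dt
        I = decay * I + f(t_mid) * tau * (1.0 - decay)
        history.append(I)
    return history

# For f = 1 the exact integral is tau * (1 - exp(-t/tau)); the recursion
# reproduces it essentially to machine precision at every step.
tau, dt = 0.05, 0.01
vals = recursive_convolution(lambda t: 1.0, tau, dt, 100)
```

A direct evaluation of the same integral would revisit the whole history at every step, giving O(N^2) cost; the recursion is what makes long viscoelastic simulations affordable.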
NASA Astrophysics Data System (ADS)
Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.
2017-11-01
In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
Tierney, Brian D.; Choi, Sukwon; DasGupta, Sandeepan; ...
2017-08-16
A distributed impedance “field cage” structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors (HEMTs) operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dV_ds/dt = 100 V/ns. For both DC and transient results, the voltage between the gate and drain is laterally distributed, ensuring the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme indicates promise for achieving breakdown voltage scalability to a few kV.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink are studied. The EOS transmits picture frame data to the ground via the Telemetry Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that those errors are bursty. The research proceeded by developing a computer based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN was written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on a cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features that most existing authentication schemes do not consider, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has stronger security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Lau, William K. M. (Technical Monitor)
2002-01-01
Previous studies (Chao 2000, Chao and Chen 2001, Kirtman and Schneider 2000, Sumi 1992) have shown that, by means of one of several model design changes, the structure of the ITCZ in an aqua-planet model with globally uniform SST and solar angle (U-SST-SA) can change between a single ITCZ at the equator and a double ITCZ straddling the equator. These model design changes include switching to a different cumulus parameterization scheme (e.g., from relaxed Arakawa Schubert scheme (RAS) to moist convective adjustment scheme (MCA)), changes within the cumulus parameterization scheme, and changes in other aspects of the model, such as horizontal resolution. Sometimes only one component of the double ITCZ shows up; but still this is an ITCZ away from the equator, quite distinct from a single ITCZ over the equator. Since these model results were obtained by different investigators using different models which have yielded reasonable general circulation, they are considered as reliable. Chao and Chen (2001; hereafter CC01) have made an initial attempt to interpret these findings based on the concept of rotational ITCZ attractors that they introduced. The purpose of this paper is to offer a more complete interpretation.
The, Yu-Kai; Fernandes, Jacqueline; Popa, M. Oana; Alekov, Alexi K.; Timmer, Jens; Lerche, Holger
2006-01-01
Voltage-gated Na+ channels play a fundamental role in the excitability of nerve and muscle cells. Defects in fast Na+ channel inactivation can cause hereditary muscle diseases with hyper- or hypoexcitability of the sarcolemma. To explore the kinetics and gating mechanisms of noninactivating muscle Na+ channels on a molecular level, we analyzed single channel currents from wild-type and five mutant Na+ channels. The mutations were localized in different protein regions which have been previously shown to be important for fast inactivation (D3-D4-linker, D3/S4-S5, D4/S4-S5, D4/S6) and exhibited distinct grades of defective fast inactivation with varying levels of persistent Na+ currents caused by late channel reopenings. Different gating schemes were fitted to the data using hidden Markov models with a correction for time interval omission and compared statistically. For all investigated channels including the wild-type, two open states were necessary to describe our data. Whereas one inactivated state was sufficient to fit the single channel behavior of wild-type channels, modeling the mutants with impaired fast inactivation revealed evidence for several inactivated states. We propose a single gating scheme with two open and three inactivated states to describe the behavior of all five examined mutants. This scheme provides a biological interpretation of the collected data, based on previous investigations in voltage-gated Na+ and K+ channels. PMID:16513781
Quantum gambling using three nonorthogonal states
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Matsumoto, Keiji
2002-11-01
We provide a quantum gambling protocol using three (symmetric) nonorthogonal states. The bias of the proposed protocol is less than that of previous ones, making it more practical. We show that the proposed scheme is secure against nonentanglement attacks. The security of the proposed scheme against entanglement attacks is shown heuristically.
Secondary School Students' Reasoning about Evolution
ERIC Educational Resources Information Center
To, Cheryl; Tenenbaum, Harriet R.; Hogh, Henriette
2017-01-01
This study examined age differences in young people's understanding of evolution theory in secondary school. A second aim of this study was to propose a new coding scheme that more accurately described students' conceptual understanding about evolutionary theory. We argue that coding schemes adopted in previous research may have overestimated…
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size, and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
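The complete subtree (CS) method at the heart of such scalable revocation can be sketched independently of the encryption layer: users sit at the leaves of a binary tree, nodes on the path from each revoked leaf to the root are marked, and the cover is the set of unmarked children of marked nodes. A key update is published for each cover node, so exactly the non-revoked users can derive current decryption keys. A minimal sketch (the pairing-based encryption itself is omitted):

```python
# Complete subtree (CS) cover: users are leaves of a complete binary tree
# (node 1 is the root, node v has children 2v and 2v+1, user i is leaf
# n_leaves + i). The cover has O(r log(n/r)) nodes for r revoked users.

def cs_cover(n_leaves, revoked):
    marked = set()
    for r in revoked:                    # mark root-to-leaf paths of revoked users
        v = n_leaves + r
        while v >= 1:
            marked.add(v)
            v //= 2
    if not marked:                       # nobody revoked: the root covers everyone
        return [1]
    cover = []
    for v in marked:                     # unmarked children of marked nodes
        for c in (2 * v, 2 * v + 1):
            if c < 2 * n_leaves and c not in marked:
                cover.append(c)
    return sorted(cover)

def leaves_under(v, n_leaves):
    nodes, leaves = [v], []
    while nodes:
        u = nodes.pop()
        if u >= n_leaves:
            leaves.append(u - n_leaves)
        else:
            nodes += [2 * u, 2 * u + 1]
    return set(leaves)

# With 8 users and user 0 revoked, the cover subtrees partition exactly
# the 7 non-revoked users.
cover = cs_cover(8, {0})
covered = set().union(*(leaves_under(v, 8) for v in cover))
```

The cover subtrees are pairwise disjoint and contain no revoked leaf, which is precisely the property that lets a revoked user's old key become useless after one key update.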
Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling
2016-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing a scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it also achieves semantic security under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703
Efficient and Anonymous Authentication Scheme for Wireless Body Area Networks.
Wu, Libing; Zhang, Yubo; Li, Li; Shen, Jian
2016-06-01
As a significant part of the Internet of Things (IoT), the Wireless Body Area Network (WBAN) has attracted much attention in recent years. In WBANs, sensors placed in or around the human body collect sensitive body data and transmit it over an open wireless channel, in which messages may be intercepted, modified, and so on. Recently, Wang et al. presented a new anonymous authentication scheme for WBANs and claimed that their scheme can solve the security problems of previous schemes. Unfortunately, we demonstrate that their scheme cannot withstand an impersonation attack: either an adversary or a malicious legal client could impersonate another legal client to the application provider. In this paper, we first give a detailed weakness analysis of Wang et al.'s scheme. We then present a novel anonymous authentication scheme for WBANs and prove that it is secure under the random oracle model. Finally, we demonstrate that our anonymous authentication scheme is more suitable for practical application than Wang et al.'s scheme due to better security and performance; compared with Wang et al.'s scheme, the computation cost of our scheme in WBANs is reduced by about 31.58%.
A provably-secure ECC-based authentication scheme for wireless sensor networks.
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-11-06
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and a weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in a wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
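Under the standard assumptions of Gaussian noise and a zero-mean Gaussian primary signal, the per-cluster LRT reduces to a monotone function of the received energy, which is what keeps the soft stage cheap; the fusion center then combines the clusters' one-bit decisions with weights. The sketch below illustrates this two-stage shape; the variances, weights, and threshold are illustrative, not the paper's values.

```python
import math

# Soft stage: log-likelihood ratio for n i.i.d. samples under
#   H0: x ~ N(0, s0)  (noise only)     H1: x ~ N(0, s0 + s1)  (signal present)
# The LLR depends on the data only through the energy sum(x**2), so each
# cluster needs a single statistic. Hard stage: weighted vote over bits.

def cluster_llr(samples, s0=1.0, s1=4.0):
    n = len(samples)
    energy = sum(x * x for x in samples)
    return (0.5 * n * math.log(s0 / (s0 + s1))
            + 0.5 * (1.0 / s0 - 1.0 / (s0 + s1)) * energy)

def fuse(decisions, weights, threshold=0.5):
    score = sum(w for d, w in zip(decisions, weights) if d)
    return score / sum(weights) >= threshold

# The LLR is strictly increasing in the received energy (the coefficient
# 1/s0 - 1/(s0+s1) is positive), so thresholding it is equivalent to an
# energy detector with an LRT-optimal threshold.
low = cluster_llr([0.1] * 50)
high = cluster_llr([2.0] * 50)
```

Weighting the hard decisions (rather than a plain majority vote) is one simple way to account for clusters experiencing different large-scale fading.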
A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.
1989-01-01
A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
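The characteristic-foot idea can be illustrated on the 1D constant-coefficient advection equation: march each grid point backward along the velocity field and interpolate the previous solution at the foot. This is a first-order semi-Lagrangian sketch, not the paper's high-order BDF/spectral-element method:

```python
import math

# One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
# for each node x_i the characteristic foot is x_i - a*dt, and the new
# value is the old solution linearly interpolated there. Like the paper's
# scheme, stability does not hinge on an explicit-convection CFL limit.

def semi_lagrangian_step(u, a, dt, dx):
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        foot = (i * dx - a * dt) / dx      # foot position in grid units
        j = math.floor(foot)
        w = foot - j                       # linear interpolation weight
        new[i] = (1 - w) * u[j % n] + w * u[(j + 1) % n]
    return new

# With an integer Courant number a*dt/dx, linear interpolation is exact
# and the step is a pure shift of the profile.
n, dx = 16, 1.0
u0 = [1.0 if i < 4 else 0.0 for i in range(n)]
u1 = semi_lagrangian_step(u0, a=2.0, dt=1.0, dx=dx)
```

High-order versions replace the single backward Euler foot with a BDF recurrence along the characteristic and the linear interpolation with spectral-element evaluation, which is where the method's accuracy comes from.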
Experimental realization of self-guided quantum coherence freezing
NASA Astrophysics Data System (ADS)
Yu, Shang; Wang, Yi-Tao; Ke, Zhi-Jin; Liu, Wei; Zhang, Wen-Hao; Chen, Geng; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can
2017-12-01
Quantum coherence is the most essential characteristic of quantum physics; specifically, when treated within the resource-theoretic framework, it is considered the most fundamental resource for quantum technologies. Other quantum resources, e.g., entanglement, are all based on coherence. It is therefore urgently important to learn how to preserve coherence in quantum channels. The best possible preservation is coherence freezing, which has been studied recently. However, in these studies the freezing condition is calculated theoretically, and a practical way to achieve this freezing is still lacking; in addition, the channels are usually taken as fixed, although in practice they also have degrees of freedom that can be used to adapt them to quantum states. Here we develop a self-guided quantum coherence freezing method, which can guide either the quantum channels (a tunable-channel scheme with upgraded channels) or the initial state (a fixed-channel scheme) to the coherence-freezing zone from any starting estimate. Specifically, in the fixed-channel scheme, the final iterated quantum states all satisfy the previously calculated freezing condition. This agreement demonstrates the validity of our method. Our work will be helpful for the better protection of quantum coherence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettelman, A.; Liu, Xiaohong; Ghan, Steven J.
2010-09-28
A process-based treatment of ice supersaturation and ice nucleation is implemented in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). The new scheme is designed to allow (1) supersaturation with respect to ice, (2) ice nucleation by aerosol particles, and (3) ice cloud cover consistent with ice microphysics. The scheme is implemented with a four-class, two-moment microphysics code and is used to evaluate ice cloud nucleation mechanisms and supersaturation in CAM. The new model is able to reproduce field observations of ice mass and mixed phase cloud occurrence better than previous versions of the model. Simulations indicate heterogeneous freezing and contact nucleation on dust are both potentially important over remote areas of the Arctic. Cloud forcing and hence climate is sensitive to different formulations of the ice microphysics. Arctic radiative fluxes are sensitive to the parameterization of ice clouds. These results indicate that ice clouds are potentially an important part of understanding cloud forcing and potential cloud feedbacks, particularly in the Arctic.
Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes
Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; ...
2016-10-18
Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of the conventional plasma-based wakefield accelerations, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high energy photons at ~O(10-100) MeV. Here, our simulations confirm such high energy photon emissions, which is in contrast with that induced by the optical laser driven wakefield scheme. In addition to this, the significantly improved emittance of the energetic electrons has been discussed.
Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.
Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G
2012-10-01
The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extracting methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity) and 95.04% (general clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was 43% lower than in previous ECG clustering schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
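The overall shape of such a pipeline is: score every column of the feature matrix once, normalize the scores into a weighting vector, and keep the top-k features for clustering. The sketch below uses plain column variance as the single-pass relevance score purely as a stand-in; it is not the paper's least-squares relevance analysis, only an illustration of the score-normalize-select structure.

```python
# Single-pass unsupervised feature weighting: score each column of the
# feature matrix X (rows = heartbeats, columns = features), normalize,
# and keep the k highest-weighted features. Variance is an illustrative
# relevance score, not the paper's least-squares criterion.

def relevance_weights(X):
    n = len(X)
    weights = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        weights.append(sum((v - mean) ** 2 for v in col) / n)
    total = sum(weights)
    return [w / total for w in weights]       # weights sum to 1

def select_top_k(X, k):
    w = relevance_weights(X)
    keep = sorted(range(len(w)), key=lambda j: -w[j])[:k]
    return sorted(keep)

# A near-constant feature (column 1) receives almost no weight and is
# the first to be dropped.
X = [[1.0, 0.0, 5.0],
     [2.0, 0.1, 1.0],
     [3.0, 0.0, 9.0],
     [4.0, 0.1, 5.0]]
w = relevance_weights(X)
top2 = select_top_k(X, 2)
```

Whatever the score, the payoff is the one reported in the abstract: clustering runs on 18 columns instead of 100, with the weighting computed in a single pass over the data.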
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD 15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (compression ratio 3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with the optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
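The figures compared above are lossless compression ratios: original size over compressed size. The ratio and the qualitative gap between structured and unstructured data can be reproduced in spirit with zlib (a gzip-style text-dictionary coder) on synthetic data; real DICOM images are not used here.

```python
import zlib

# Compression ratio = original bytes / compressed bytes, the figure of
# merit used when comparing lossless schemes. zlib stands in for the
# gzip-style dictionary coder; the inputs are synthetic, not DICOM images.

def compression_ratio(data: bytes, level: int = 9) -> float:
    return len(data) / len(zlib.compress(data, level))

# Smooth, repetitive data (like slowly varying image rows) compresses
# well even for a dictionary coder; data with no repeats barely does,
# which is why predictive schemes like JPEG-LS win on natural images.
smooth = bytes((i // 8) % 256 for i in range(4096))
ratio = compression_ratio(smooth)
```

Predictive schemes go one step further than this sketch: they decorrelate each pixel against its neighbors first, so that what reaches the entropy coder is already close to noise-free residuals.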
A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems
Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok
2018-01-01
Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology, and geometric theorems are then applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes and then propose a novel passive tracking scheme exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze the properties of three GT-based tracking algorithms and evaluate the performance of these schemes experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme provides generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to other schemes. PMID:29562621
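The geometric primitive underneath any GT-based scheme is recovering the point where the target's trajectory crosses a line-of-sight link. Expressed parametrically, this is a plain segment-intersection computation; the node coordinates below are illustrative, and the sketch is not the paper's ERT method.

```python
# Crossing point (CP) of a target trajectory segment with a line-of-sight
# link (LOSL): write both segments parametrically, p1 + t*(p2-p1) and
# q1 + s*(q2-q1), solve the 2x2 system with cross products, and accept
# the solution only if it lies inside both segments.

def crossing_point(p1, p2, q1, q2):
    """CP of link p1-p2 and trajectory q1-q2, or None if they miss."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    den = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if den == 0:
        return None                      # parallel segments: no single CP
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / den
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / den
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None                          # lines meet outside the segments

# A target walking from (0, -1) to (2, 1) crosses the link between nodes
# at (0, 0) and (2, 0) at its midpoint.
cp = crossing_point((0.0, 0.0), (2.0, 0.0), (0.0, -1.0), (2.0, 1.0))
```

A tracker collects one such CP per attenuated link and time window; the sequence of CPs is the discrete trajectory the abstract refers to.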
Security Analysis and Improvement of an Anonymous Authentication Scheme for Roaming Services
Lee, Youngsook; Paik, Juryon
2014-01-01
An anonymous authentication scheme for roaming services in global mobility networks allows a mobile user visiting a foreign network to achieve mutual authentication and session key establishment with the foreign-network operator in an anonymous manner. In this work, we revisit He et al.'s anonymous authentication scheme for roaming services and present previously unpublished security weaknesses in the scheme: (1) it fails to provide user anonymity against any third party as well as the foreign agent, (2) it cannot protect the passwords of mobile users due to its vulnerability to an offline dictionary attack, and (3) it does not achieve session-key security against a man-in-the-middle attack. We also show how the security weaknesses of He et al.'s scheme can be addressed without degrading the efficiency of the scheme. PMID:25302330
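Weakness (2), the offline dictionary attack, works because any value derived deterministically from a low-entropy password alone can be tested against a candidate list at full speed. A toy illustration with an unsalted hash (the actual protocol transcript of He et al.'s scheme is not reproduced here):

```python
import hashlib

def offline_dictionary_attack(target_digest, wordlist):
    """Hash each candidate and compare: an unsalted, unkeyed digest
    lets an attacker test guesses offline at full speed."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == target_digest:
            return guess
    return None

# Hypothetical leaked verifier for a weak password.
leaked = hashlib.sha256(b"winter2014").hexdigest()
assert offline_dictionary_attack(leaked, ["123456", "winter2014"]) == "winter2014"
```

Salting alone does not stop this when the salt is also known; binding the verifier to a high-entropy secret does.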
Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia
2018-01-01
Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional “encrypt-then-sign” or “sign-then-encrypt” strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and has been gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of end users in signcryption and designcryption is linear in the complexity of the signing and encryption access policies. Moreover, only a single authority responsible for attribute management and key generation exists in previously proposed ABSC schemes, whereas in reality different authorities usually monitor different attributes of a user. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation. PMID:29772840
An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity
Khan, Muhammad Khurram; Kumari, Saru
2013-01-01
The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented, followed by its efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272
Provably secure identity-based identification and signature schemes from code assumptions
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound research on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940
Adaptive color halftoning for minimum perceived error using the blue noise mask
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1997-04-01
Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two different color planes and applying an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
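BNM halftoning itself is a pointwise threshold of the image against the mask. A minimal sketch, using a random mask as a stand-in for a true blue noise mask (a real BNM would additionally have the required blue-noise spectral properties):

```python
import random

def halftone(image, mask):
    """Threshold each pixel against the mask value at the same
    position: the output dot is on where intensity exceeds it."""
    return [[1 if image[y][x] > mask[y][x] else 0
             for x in range(len(image[0]))]
            for y in range(len(image))]

# Toy 4x4 mid-gray patch; a random mask stands in for a blue noise mask.
random.seed(0)
img = [[128] * 4 for _ in range(4)]
mask = [[random.randrange(256) for _ in range(4)] for _ in range(4)]
dots = halftone(img, mask)
assert all(d in (0, 1) for row in dots for d in row)
```

The shift/invert/mutually-exclusive strategies discussed above amount to applying different (or transformed) masks per color plane in this same thresholding step.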
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
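The Hermitian divided differences used here generalize the classical Newton divided-difference table by repeating nodes and supplying derivative values at them. As a hedged sketch, the plain Newton building block looks like this (the Hermitian extension is omitted):

```python
def divided_differences(xs, ys):
    """Newton divided-difference coefficients f[x0], f[x0,x1], ...
    computed in place column by column."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

# For f(x) = x^2 on nodes 0, 1, 2: f[x0]=0, f[x0,x1]=1, f[x0,x1,x2]=1.
assert divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0]) == [0.0, 1.0, 1.0]
```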
Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de
2016-02-07
We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
Arshad, Hamed; Rasoolzadegan, Abbas
2016-11-01
Authentication and key agreement schemes play a very important role in enhancing the level of security of telecare medicine information systems (TMISs). Recently, Amin and Biswas demonstrated that the authentication scheme proposed by Giri et al. is vulnerable to off-line password guessing attacks and privileged insider attacks and also does not provide user anonymity. They also proposed an improved authentication scheme, claiming that it resists various security attacks. However, this paper demonstrates that Amin and Biswas's scheme is defenseless against off-line password guessing attacks and replay attacks and also does not provide perfect forward secrecy. This paper also shows that Giri et al.'s scheme not only suffers from the weaknesses pointed out by Amin and Biswas, but it also is vulnerable to replay attacks and does not provide perfect forward secrecy. Moreover, this paper proposes a novel authentication and key agreement scheme to overcome the mentioned weaknesses. Security and performance analyses show that the proposed scheme not only overcomes the mentioned security weaknesses, but also is more efficient than the previous schemes.
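Perfect forward secrecy, which both analyzed schemes lack, is typically obtained by deriving the session key from fresh ephemeral exponents rather than long-term secrets. A toy Diffie-Hellman sketch (small Mersenne prime for illustration only; real deployments use standardized large groups or elliptic curves):

```python
import secrets

# Toy Mersenne prime, illustration only; real schemes use
# standardized large MODP groups or elliptic curves.
P = 2**89 - 1
G = 3

def ephemeral_keypair():
    """Fresh per-session exponent; discarding it after the session
    is what yields perfect forward secrecy."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

a, A = ephemeral_keypair()
b, B = ephemeral_keypair()
# Both sides derive the same session secret; a later compromise of
# long-term keys reveals nothing, since a and b were never stored.
assert pow(B, a, P) == pow(A, b, P)
```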
Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav
2014-10-01
Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks, as its services are provided over the public network. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory. It enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that their scheme is vulnerable to denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. Further, our aim is to propose a new chaos-map-based anonymous user authentication scheme for TMIS to overcome the weaknesses of Jiang et al.'s scheme, while also retaining the original merits of their scheme. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of the communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through the BAN (Burrows, Abadi, and Needham) logic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coombe, D.A.; Snider, R.F.
1980-02-15
ES, CS, and IOS approximations to atom-diatom kinetic cross sections are derived. In doing so, reduced S-matrices in a translational-internal coupling scheme are stressed. This entails the insertion of recently obtained approximate reduced S-matrices in the translational-internal coupling scheme into previously derived general expressions for the kinetic cross sections. Of special interest is the structure (rotational j quantum number dependence) of the kinetic cross sections associated with the Senftleben-Beenakker effects and of pure internal state relaxation phenomena. The viscomagnetic effect is used as an illustrative example. It is found in particular that there is a great similarity of structure between the energy sudden (and IOS) approximation and the previously derived distorted wave Born results.
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
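The detection side of such a sampling scheme can be sketched as a binomial tail probability: a flock is declared infected when at least a threshold number of the n sampled eggs test positive. A simplified model assuming independent eggs (the study's simulations additionally model within-flock transmission dynamics and test specificity):

```python
from math import comb

def detection_probability(n: int, p_pos: float, threshold: int) -> float:
    """P(at least `threshold` of n sampled eggs test positive) when
    each egg tests positive independently with probability p_pos."""
    return sum(comb(n, k) * p_pos**k * (1 - p_pos)**(n - k)
               for k in range(threshold, n + 1))

# Larger samples detect a low-prevalence infection more often.
assert detection_probability(30, 0.2, 3) > detection_probability(10, 0.2, 3)
```

Raising the threshold trades sensitivity for specificity, which is exactly the tension the cumulative-sample design above tries to manage.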
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Liwei; Qian, Yun; Zhou, Tianjun
2014-10-01
In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter “relative humidity criteria” (RH), which was not considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, which are contributed by the positive responses of explicit rainfall. Following an increase of RH, the increases in low-level convergence and the associated increases in cloud water favor the increase of explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.
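A simplified, single-chain stand-in for the MVFSA sampler can be sketched as very fast simulated annealing: Cauchy-like proposal steps whose scale shrinks with temperature. The objective below is a toy placeholder for the model-versus-observation cost, and the parameter bounds are illustrative, not those of RegCM3:

```python
import math
import random

def vfsa_minimize(objective, bounds, n_iter=500, seed=1):
    """Very-fast-simulated-annealing-style search: Cauchy-like moves
    whose scale shrinks with temperature (a simplified, single-chain
    stand-in for the multiple-chain MVFSA sampler)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best, best_f = list(x), objective(x)
    for k in range(1, n_iter + 1):
        t = math.exp(-0.05 * k)  # fast-decaying temperature
        cand = [min(hi, max(lo, xi + t * (hi - lo)
                            * math.tan(math.pi * (rng.random() - 0.5))))
                for xi, (lo, hi) in zip(x, bounds)]
        f = objective(cand)
        # Accept improvements always, worse moves with Boltzmann odds.
        if f < best_f or rng.random() < math.exp(-(f - best_f) / max(t, 1e-12)):
            x = cand
            if f < best_f:
                best, best_f = list(cand), f
    return best, best_f

# Toy cost standing in for model skill against observations.
obj = lambda p: sum((pi - 0.5) ** 2 for pi in p)
params, score = vfsa_minimize(obj, [(0.0, 1.0)] * 7)
assert score < obj([0.0] * 7)
```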
Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle
NASA Astrophysics Data System (ADS)
Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai
2018-03-01
Deck assembly is a critical quality-control point in the final satellite assembly process, and cable extrusion and structural collision problems during assembly directly affect the development quality and schedule of the satellite. To address the problems in the deck assembly process, an assembly scheme for satellite decks based on the robot flexibility control principle is proposed in this paper. The scheme is introduced first; next, its key technologies, end force perception and flexible docking control, are studied; then, the implementation process of the assembly scheme is described in detail; finally, an actual application case of the scheme is given. The results show that, compared with the traditional assembly scheme, the proposed scheme has obvious advantages in work efficiency, reliability, universality, and other aspects.
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive, which is very suitable for cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which is a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports attribute update: in particular, when a user's attribute needs to be updated, only that user's secret key component related to the attribute needs to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven to be semantically secure against chosen ciphertext-policy and chosen plaintext attacks in the general bilinear group model, and it is also proven to be semantically secure against chosen keyword attacks under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential-arithmetic-based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550
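The self-healing idea can be illustrated with two one-way hash chains released in opposite directions: a user who saw an earlier and a later broadcast can recompute the keys in between, while keys outside that window stay hidden. This is a generic hash-chain sketch, not the paper's exponential-arithmetic construction:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(seed: bytes, length: int):
    """One-way hash chain: element i+1 = h(element i)."""
    out = [seed]
    for _ in range(length - 1):
        out.append(h(out[-1]))
    return out

m = 5  # number of sessions
fwd = chain(b"forward-seed", m)         # fwd[j] learned from broadcast j
bwd = chain(b"backward-seed", m)[::-1]  # released in reverse: bwd[j] = h(bwd[j+1])

def session_key(j: int) -> bytes:
    """Session key combines both chain elements for session j."""
    return h(fwd[j] + bwd[j])

# A user who received broadcasts 1 and 4 self-heals key 2:
# fwd[2] = h(fwd[1]) moves forward; bwd[2] = h(h(bwd[4])) moves backward.
assert session_key(2) == h(h(fwd[1]) + h(h(bwd[4])))
```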
Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes
2017-01-01
Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision to drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme, which was based on factor loadings obtained in the analyses, was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. 
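The weighting step can be sketched as a loading-normalized weighted sum of the factor indicators. The loadings below are made-up illustrative values, not those estimated from the 183 hospitals:

```python
def composite_score(indicators, loadings):
    """Weighted aggregate of process indicators; the weights are the
    factor loadings normalized to sum to one."""
    total = sum(loadings)
    weights = [l / total for l in loadings]
    return sum(w * x for w, x in zip(weights, indicators))

# Five factor scores (0..1) with hypothetical loadings for the five
# factors named above (flow, mobility, decision support, EPR, integration).
indicators = [0.8, 0.4, 0.6, 0.9, 0.5]
loadings = [0.72, 0.65, 0.70, 0.68, 0.61]
score = composite_score(indicators, loadings)
assert 0.0 <= score <= 1.0
```

Because the weights are normalized, moderate changes in the loading estimates shift the score only slightly, consistent with the robustness reported above.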
Experimental demonstration of spinor slow light
NASA Astrophysics Data System (ADS)
Lee, Meng-Jung; Ruseckas, Julius; Lee, Chin-Yuan; Kudriašov, Viačeslav; Chang, Kao-Fang; Cho, Hung-Wen; Juzeliūnas, Gediminas; Yu, Ite A.
2016-03-01
Over the last decade there has been continuing interest in slow and stored light based on the electromagnetically induced transparency (EIT) effect, because of their potential applications in quantum information manipulation. However, previous experimental works all dealt with single-component slow light, which cannot be employed as a qubit. In this work, we report the first experimental demonstration of two-component, or spinor, slow light (SSL) using a double tripod (DT) atom-light coupling scheme. The oscillations between the two components, similar to the Rabi oscillation of a two-level system or a qubit, were observed. Single-photon SSL can be considered a two-color qubit. We experimentally demonstrated a possible application of the DT scheme as a quantum memory and quantum rotator for two-color qubits. This work opens up a new direction in slow light research.
Serological and genetic examination of some nontypical Streptococcus mutans strains.
Coykendall, A L; Bratthall, D; O'Connor, K; Dvarskas, R A
1976-09-01
Thirty-four strains of Streptococcus mutans whose antigenic or genetic positions were unclear or unknown with respect to the serological scheme of Bratthall (1970) and Perch et al. (1974), or the genetic (deoxyribonucleic acid base sequence homology) scheme of Coykendall were analyzed to clarify their relationship to previously well-characterized strains. Strain OMZ175 of the "new" serotype f was genetically homologous with strains of S. mutans subsp. mutans. Strains of the "new" serotype g were homologous with serotype d strains (S. mutans subsp. sobrinus). Strains isolated from wild rats constituted a new genetic group but carried the c antigen. Thus, strains within a "genospecies" (subspecies) of S. mutans may not always carry a unique or characteristic antigen. We suggest that the existence of multiple serotypes within subspecies represents antigenic variation and adaptations to hosts.
Parity-time-symmetry enhanced optomechanically-induced-transparency
Li, Wenlin; Jiang, Yunfeng; Li, Chong; Song, Heshan
2016-01-01
We propose and analyze a scheme to enhance optomechanically-induced-transparency (OMIT) based on a parity-time-symmetric optomechanical system. Our results predict that an OMIT window which does not exist originally can appear in a weakly coupled and weakly driven optomechanical system via coupling to an auxiliary active cavity with optical gain. This phenomenon is quite different from those reported in previous works, in which gain is considered only to damage the OMIT phenomenon, or even to lead to electromagnetically induced absorption or inverted OMIT. Such enhanced OMIT effects are ascribed to the additional gain, which can increase the photon number in the cavity without reducing the effective decay. We also discuss the feasibility of the scheme by analyzing recent experimental parameters. Our work provides a promising platform for coherent manipulation and slow light operation, which has potential applications in quantum information processing and quantum optical devices. PMID:27489193
Exploration of tetrahedral structures in silicate cathodes using a motif-network scheme
Zhao, Xin; Wu, Shunqing; Lv, Xiaobao; Nguyen, Manh Cuong; Wang, Cai-Zhuang; Lin, Zijing; Zhu, Zi-Zhong; Ho, Kai-Ming
2015-01-01
Using a motif-network search scheme, we studied the tetrahedral structures of the dilithium/disodium transition metal orthosilicates A2MSiO4 with A = Li or Na and M = Mn, Fe or Co. In addition to finding all previously reported structures, we discovered many other different tetrahedral-network-based crystal structures which are highly degenerate in energy. These structures can be classified into structures with 1D, 2D and 3D M-Si-O frameworks. A clear trend of the structural preference in different systems was revealed and possible indicators that affect the structure stabilities were introduced. For the case of Na systems which have been much less investigated in the literature relative to the Li systems, we predicted their ground state structures and found evidence for the existence of new structural motifs. PMID:26497381
Evaluation of Data Used for Modelling the Stratosphere of Saturn
NASA Astrophysics Data System (ADS)
Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.
2015-11-01
Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn's stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-sectional data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2 are evaluated. By evaluating the techniques used and the data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a steady-state model of the stratosphere of Saturn. The total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model's output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within this thesis.
The importance of correct and temperature-appropriate photochemical and kinetic data for the atmosphere under examination is emphasised as a consequence.
Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services
NASA Astrophysics Data System (ADS)
Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui
2017-09-01
In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base station computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU, and the resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.
The Portrayal of Ethnic Chinese/Japanese Peoples in Britain's Primary Reading Schemes.
ERIC Educational Resources Information Center
Rice, Ian Stratton
1988-01-01
Reports an expansion of a previous study of the portrayal of Blacks in the 10 most commonly used reading schemes in the primary schools of a large industrial city in Great Britain. Found that ethnic Chinese and Japanese were underrepresented and portrayed in an ignorant and patronizing manner. (ARH)
Quantum state sharing against the controller's cheating
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Zhong, Hong; Huang, Liu-sheng
2013-08-01
Most existing quantum state sharing (QSTS) schemes are equivalent to controlled teleportation, in which a designated agent (i.e., the recoverer) can recover the teleported state with the help of the controllers. However, a controller may attempt to cheat the recoverer during the phase of recovering the secret state. How can we detect this cheating? In this paper, we consider the problem of detecting the controller's cheating in quantum state sharing, and further propose an effective quantum state sharing scheme against the controller's cheating. We cleverly use quantum secret sharing, multiple quantum state sharing, and decoy-particle techniques. In our scheme, via a previously shared entangled state, Alice can teleport multiple arbitrary multi-qubit states to Bob with the help of Charlie. Furthermore, by the classical information shared previously, Alice and Bob can check whether there is any cheating by Charlie. In addition, our scheme only needs to perform Bell-state and single-particle measurements, and to apply the C-NOT gate and other single-particle unitary operations. With present techniques, it is feasible to implement these necessary measurements and operations.
NASA Astrophysics Data System (ADS)
Liu, Jian; Ruan, Xiaoe
2017-07-01
This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delays occurring in the input and output channels, modelled as 0-1 Bernoulli-type stochastic variables. In both schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration, whilst for the delayed signal of the system output one scheme substitutes the synchronous predetermined desired trajectory and the other takes the synchronous output at the previous operation, respectively. By virtue of the mathematical expectation, the tracking performance is analysed, which exhibits that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is of full column rank. Finally, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
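The output-substitution idea can be illustrated with a minimal numerical sketch. The toy example below is not from the paper: the scalar plant, learning gain, horizon, and delay probability are all assumed for illustration. It implements a derivative-type ILC update for a discrete-time plant in which a dropped output sample, modelled as a 0-1 Bernoulli variable, is replaced by the synchronous desired trajectory (the first of the two schemes), so the update simply skips that sample:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 0.8, 1.0, 1.0          # scalar LTI plant: x(t+1) = a x + b u, y = c x
T, iters, p = 20, 60, 0.8        # horizon, iterations, probability a packet arrives
yd = np.sin(np.linspace(0, np.pi, T + 1))   # desired trajectory
gain = 0.5                        # learning gain; |1 - gain*c*b| < 1 for convergence

u = np.zeros(T)
errs = []
for k in range(iters):
    # run the plant over the trial
    x = 0.0
    y = np.zeros(T + 1)
    for t in range(T):
        x = a * x + b * u[t]
        y[t + 1] = c * x
    # D-type update; a dropped output sample (Bernoulli) is replaced by yd,
    # which makes the error zero there, i.e. no update at that sample
    arrived = rng.random(T + 1) < p
    y_used = np.where(arrived, y, yd)
    e = yd - y_used
    u = u + gain * e[1:]
    errs.append(np.max(np.abs(yd[1:] - y[1:])))   # true tracking error

print(errs[0], errs[-1])
```

Despite the random dropouts, the true tracking error contracts over the iterations because the update still fires with probability p at each sample.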
Free-Space Quantum Signatures Using Heterodyne Measurements.
Croal, Callum; Peuntinger, Christian; Heim, Bettina; Khan, Imran; Marquardt, Christoph; Leuchs, Gerd; Wallden, Petros; Andersson, Erika; Korolkova, Natalia
2016-09-02
Digital signatures guarantee the authorship of electronic communications. Currently used "classical" signature schemes rely on unproven computational assumptions for security, while quantum signatures rely only on the laws of quantum mechanics to sign a classical message. Previous quantum signature schemes have used unambiguous quantum measurements. Such measurements, however, sometimes give no result, reducing the efficiency of the protocol. Here, we instead use heterodyne detection, which always gives a result, although there is always some uncertainty. We experimentally demonstrate feasibility in a real environment by distributing signature states through a noisy 1.6 km free-space channel. Our results show that continuous-variable heterodyne detection improves the signature rate for this type of scheme and therefore represents an interesting direction in the search for practical quantum signature schemes. For transmission values ranging from 100% to 10%, but otherwise assuming an ideal implementation with no other imperfections, the signature length is shorter by a factor of 2 to 10. As compared with previous relevant experimental realizations, the signature length in this implementation is several orders of magnitude shorter.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
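The multigrid concept behind the convergence acceleration can be illustrated on a model problem. The sketch below is a generic 1D Poisson V-cycle with weighted-Jacobi smoothing, injection restriction, and linear-interpolation prolongation; it is an illustrative assumption, not Proteus code:

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    # weighted Jacobi relaxation for -u'' = f on interior points
    for _ in range(sweeps):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h):
    u = smooth(u, f, h)                    # pre-smoothing
    if len(u) <= 3:
        return u                           # coarsest grid: smoothing suffices
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    rc = r[::2].copy()                     # restriction (injection)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                            # prolongation (linear interpolation)
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return smooth(u, f, h)                 # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)
```

Each V-cycle damps high-frequency error by smoothing on the fine grid and low-frequency error by recursing on coarser grids, which is the mechanism that accelerates convergence relative to single-grid relaxation.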
Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals
NASA Astrophysics Data System (ADS)
Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.
2009-12-01
This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
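The CUSUM algorithm at the core of such a detector can be sketched in a few lines. The example below is a generic one-sided CUSUM applied to a synthetic traffic trace; the threshold, slack, and traffic statistics are illustrative assumptions, not Vanguard's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum(x, mu0, k, h):
    """One-sided CUSUM: alarm when the cumulative positive drift away from
    the in-control mean mu0 (minus slack k) exceeds threshold h."""
    s, alarms = 0.0, []
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - mu0 - k))
        if s > h:
            alarms.append(t)
            s = 0.0          # restart the statistic after an alarm
    return alarms

# synthetic anomaly: baseline traffic statistic, then pulses raise the mean
normal = rng.normal(10.0, 1.0, 200)
attack = rng.normal(14.0, 1.0, 50)       # pulse period with an elevated mean
trace = np.concatenate([normal, attack])
alarms = cusum(trace, mu0=10.0, k=1.0, h=8.0)
print(alarms)
```

The slack k absorbs in-control fluctuations (keeping the false alarm rate low over the first 200 samples) while the sustained shift during the attack accumulates quickly past the threshold.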
An improved biometrics-based authentication scheme for telecare medical information systems.
Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping
2015-03-01
Telecare medical information system (TMIS) offers healthcare delivery services, and patients can acquire their desired medical services conveniently through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and identify that it is insecure against the known session key attack and the impersonation attack. Thereby, we present a modified biometrics-based authentication scheme for TMIS to eliminate the aforementioned faults. Besides, we demonstrate the completeness of the proposed scheme through BAN logic. Compared to the related schemes, our protocol can provide stronger security and is more practical.
Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2018-08-01
To investigate and reduce influences on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of 23Na in vivo with respect to signal sampling. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise on the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7T. For the considered sampling schemes, T2,s*/T2,l* exhibit an average bias of 3%/4% with a variation of 25%/15% based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additionally fitting a noise term is only beneficial when sampling extends to late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of 23Na is feasible in vivo. However, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
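The biexponential T2* estimation problem can be illustrated with a small simulation. The sketch below is illustrative only: the echo times, noise level, and grid-search fitting approach are assumptions, not the authors' method. It generates a two-component decay with short/long constants near the reported white-matter values and recovers them, solving the two amplitudes linearly at each grid point:

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed ground truth, roughly matching the reported 23Na white-matter values
T2s_true, T2l_true, As, Al = 5.0, 35.0, 0.6, 0.4
te = np.linspace(0.5, 80.0, 32)                 # echo times in ms (linear sampling)
signal = As * np.exp(-te / T2s_true) + Al * np.exp(-te / T2l_true)
signal += rng.normal(0.0, 0.005, te.size)       # additive measurement noise

# grid search over the two decay constants; amplitudes solved by least squares
best = (None, None, np.inf)
for t2s in np.arange(2.0, 10.0, 0.1):
    for t2l in np.arange(20.0, 50.0, 0.5):
        basis = np.column_stack([np.exp(-te / t2s), np.exp(-te / t2l)])
        amps = np.linalg.lstsq(basis, signal, rcond=None)[0]
        sse = np.sum((basis @ amps - signal) ** 2)
        if sse < best[2]:
            best = (t2s, t2l, sse)

t2s_fit, t2l_fit, _ = best
print(t2s_fit, t2l_fit)
```

Because biexponential fits are ill-conditioned, the placement of the echo-time samples strongly affects how sharply the sum-of-squares surface localizes the two decay constants, which is the effect the sampling-scheme optimization targets.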
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison of the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme is made. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
NASA Astrophysics Data System (ADS)
Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei
2018-04-01
Due to the uncertainty of a stratospheric airship's shape and the security problems caused by this uncertainty, surface reconstruction and surface deformation monitoring of an airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. This scheme was then compared with the original √3-subdivision scheme. The results show that our scheme reduces the shrinkage of the surface and the number of narrow triangles while keeping sharp features, so surface reconstruction and surface deformation monitoring of the airship can be conducted precisely.
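Shepard interpolation, the ingredient added to the subdivision scheme, is plain inverse-distance weighting. A minimal sketch follows; the toy 2D data and the power parameter are illustrative assumptions, not airship scan data:

```python
import numpy as np

def shepard(points, values, query, power=2.0, eps=1e-12):
    """Shepard inverse-distance-weighted interpolation at query points."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps avoids division by zero
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query point
    return w @ values

# scattered samples of z = x + y on the unit square (toy stand-in for scan data)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = pts.sum(axis=1)
q = np.array([[0.5, 0.5], [0.25, 0.25]])
out = shepard(pts, vals, q)
print(out)
```

A query coinciding with a sample point reproduces that sample's value (the weight there dominates), which is the interpolating property that makes the method attractive for placing new subdivision vertices on measured data.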
NASA Astrophysics Data System (ADS)
Mattoon, C. M.; Sarazin, F.; Hackman, G.; Cunningham, E. S.; Austin, R. A. E.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hyland, B.; Koopmans, K. A.; Leslie, J. R.; Phillips, A. A.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Svensson, C. E.; Waddington, J. C.; Walker, P. M.; Washbrook, B.; Zganjar, E.
2007-01-01
The β-decay of 32Na has been studied using β-γ coincidences. New transitions and levels are tentatively placed in the level scheme of 32Mg from an analysis of γ-γ and β-γ-γ coincidences. The observation of the indirect feeding of the 2321 keV state in 32Mg removes some restrictions previously placed on the spin assignment for this state. No evidence of a state at 2117 keV in 32Mg is found. Previously unobserved weak transitions up to 5.4 MeV were recorded but could not be placed in the decay scheme of 32Na.
Improved nearest codeword search scheme using a tighter kick-out condition
NASA Astrophysics Data System (ADS)
Hwang, Kuo-Feng; Chang, Chin-Chen
2001-09-01
Using a tighter kick-out condition as a faster approach to nearest codeword searches is proposed. The proposed scheme finds the nearest codeword that is identical to the one found using a full search; however, using our scheme, the search time is much shorter. Our scheme first establishes a tighter kick-out condition. Then, a tentative nearest codeword is obtained from the codewords that survive the tighter condition. Finally, the tentative nearest codeword is combined with the query vector to constitute a better kick-out condition. In other words, more codewords can be excluded without actually computing the distances between the bypassed codewords and the query vector. Comparisons with previous work are included to present the benefits of the proposed scheme in relation to search time.
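The kick-out idea can be sketched with a classical mean-based bound; this is an illustration of the general technique, not necessarily the authors' tighter condition. By the Cauchy-Schwarz inequality, ||x - c||² ≥ (Σx - Σc)²/k for k-dimensional vectors, so any codeword whose component sum differs too much from the query's cannot beat the current best and is bypassed without a full distance computation:

```python
import numpy as np

rng = np.random.default_rng(3)

def nearest_kickout(codebook, x):
    """Full-search-equivalent nearest codeword search with a mean-based
    kick-out test: since ||x - c||^2 >= (sum(x) - sum(c))^2 / k, a codeword
    whose lower bound already exceeds the best distance is skipped."""
    k = x.size
    sums = codebook.sum(axis=1)
    sx = x.sum()
    best_i, best_d = 0, np.sum((x - codebook[0]) ** 2)
    skipped = 0
    for i in range(1, len(codebook)):
        if (sx - sums[i]) ** 2 / k >= best_d:   # kick-out: cannot be nearer
            skipped += 1
            continue
        d = np.sum((x - codebook[i]) ** 2)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, skipped

codebook = rng.random((256, 16))
x = rng.random(16)
i_fast, skipped = nearest_kickout(codebook, x)
i_full = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
print(i_fast, i_full, skipped)
```

The result matches the full search exactly; a tighter bound simply makes the kick-out test reject more codewords before any distance is computed.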
An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.
Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang
2018-05-22
With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives through applications such as e-payment and e-voting systems. However, in these two systems, state-of-the-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice; this scheme mainly uses a rejection sampling theorem instead of constructing a trapdoor. Meanwhile, this scheme does not depend on complex public key infrastructure and can resist quantum computer attack. Then we design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model, and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms the other traditional existing identity-based blind signature schemes in signing speed and verification speed, and outperforms the other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Zheng, Bin; Wang, Xiao-Hui; Gur, David
2008-03-01
Digital breast tomosynthesis (DBT) has emerged as a promising imaging modality for screening mammography. However, visually detecting micro-calcification clusters depicted on DBT images is a difficult task. Computer-aided detection (CAD) schemes for detecting micro-calcification clusters depicted on mammograms can achieve high performance and the use of CAD results can assist radiologists in detecting subtle micro-calcification clusters. In this study, we compared the performance of an available 2D based CAD scheme with one that includes a new grouping and scoring method when applied to both projection and reconstructed DBT images. We selected a dataset involving 96 DBT examinations acquired on 45 women. Each DBT image set included 11 low dose projection images and a varying number of reconstructed image slices ranging from 18 to 87. In this dataset 20 true-positive micro-calcification clusters were visually detected on the projection images and 40 were visually detected on the reconstructed images, respectively. We first applied the CAD scheme that was previously developed in our laboratory to the DBT dataset. We then tested a new grouping method that defines an independent cluster by grouping the same cluster detected on different projection or reconstructed images. We then compared four scoring methods to assess the CAD performance. The maximum sensitivity level observed for the different grouping and scoring methods were 70% and 88% for the projection and reconstructed images with a maximum false-positive rate of 4.0 and 15.9 per examination, respectively. 
This preliminary study demonstrates that (1) among the maximum, the minimum or the average CAD generated scores, using the maximum score of the grouped cluster regions achieved the highest performance level, (2) the histogram based scoring method is reasonably effective in reducing false-positive detections on the projection images but the overall CAD sensitivity is lower due to lower signal-to-noise ratio, and (3) CAD achieved higher sensitivity and higher false-positive rate (per examination) on the reconstructed images. We concluded that without changing the detection threshold or performing pre-filtering to possibly increase detection sensitivity, current CAD schemes developed and optimized for 2D mammograms perform relatively poorly and need to be re-optimized using DBT datasets and new grouping and scoring methods need to be incorporated into the schemes if these are to be used on the DBT examinations.
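The grouping-and-scoring step can be sketched as follows. The detection records, grouping radius, and greedy merge rule below are hypothetical; the max-score rule is the one the study found performed best among the compared scoring methods:

```python
# hypothetical detection records: (slice_index, x, y, score)
detections = [
    (10, 102, 230, 0.41),
    (11, 104, 228, 0.77),   # same physical cluster as above (nearby x, y)
    (12, 103, 231, 0.55),
    (30, 400, 120, 0.62),   # a second, separate cluster
]

def group_detections(dets, radius=10.0):
    """Greedy grouping: a detection within `radius` in-plane of an existing
    group joins it; the group score is the MAX of its members' scores."""
    groups = []   # each group: [cx, cy, score], center fixed at first member
    for _, x, y, s in dets:
        for g in groups:
            if (g[0] - x) ** 2 + (g[1] - y) ** 2 <= radius ** 2:
                g[2] = max(g[2], s)
                break
        else:
            groups.append([float(x), float(y), s])
    return groups

groups = group_detections(detections)
print(len(groups), [round(g[2], 2) for g in groups])
```

Merging the per-slice detections of one physical cluster into a single group prevents the same micro-calcification cluster from being counted as several independent detections, and taking the maximum member score preserves the strongest evidence for that cluster.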
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. 
The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme together with a sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives, and a fourth-order Runge-Kutta method for time integration.
Yang, Hui; He, Yongqi; Zhang, Jie; Ji, Yuefeng; Bai, Wei; Lee, Young
2016-04-18
Cloud radio access network (C-RAN) has become a promising scenario to accommodate high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. In our previous work, we implemented cross stratum optimization of optical network and application stratum resources, which allows the services to be accommodated in optical networks. In view of this, this study extends the approach to consider the multi-dimensional optimization of radio, optical and BBU processing resources in the 5G age. We propose a novel multi-stratum resources optimization (MSRO) architecture with network functions virtualization for cloud-based radio over optical fiber networks (C-RoFN) using software defined control. A global evaluation scheme (GES) for MSRO in C-RoFN is introduced based on the proposed architecture. The MSRO can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical and BBU resources effectively to maximize radio coverage. The efficiency and feasibility of the proposed architecture are experimentally demonstrated on an OpenFlow-based enhanced SDN testbed. The performance of GES under a heavy traffic load scenario is also quantitatively evaluated based on the MSRO architecture in terms of resource occupation rate and path provisioning latency, compared with other provisioning schemes.
Error function attack of chaos synchronization based encryption schemes.
Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu
2004-03-01
Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes, and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged by the quality factor. Copyright 2004 American Institute of Physics.
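The error function attack can be demonstrated on a toy chaotic-masking cipher. The sketch below is an illustrative simplification, not one of the schemes analyzed in the paper: it uses a logistic-map carrier with a known initial condition (standing in for a synchronized receiver), and all parameters are assumptions. The attack scans trial keys and locates the minimum of the error function:

```python
import numpy as np

def logistic_stream(r, x0, n):
    # iterate the logistic map x <- r x (1 - x) and return the orbit
    x, out = x0, np.empty(n)
    for t in range(n):
        x = r * x * (1.0 - x)
        out[t] = x
    return out

# toy chaotic-masking cipher: ciphertext = chaotic carrier + small message
r_true, x0, n = 3.91, 0.3, 64
message = 0.02 * np.sin(np.arange(n))
cipher = logistic_stream(r_true, x0, n) + message

# error function attack: scan trial keys and measure the mismatch between
# the ciphertext and the carrier regenerated with each trial key
trial_keys = np.round(np.arange(3.800, 4.000, 0.001), 3)
efa = [float(np.mean(np.abs(cipher - logistic_stream(r, x0, n))))
       for r in trial_keys]
r_found = trial_keys[int(np.argmin(efa))]
print(r_found)
```

At a wrong trial key the regenerated orbit decorrelates from the carrier within a few iterations, so the error function is large everywhere except in a sharp dip at the true key, which is exactly the signature the attack exploits.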
An, Younghwa
2012-01-01
Recently, many biometrics-based user authentication schemes using smart cards have been proposed to improve the security weaknesses in user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still insecure against various attacks. We also propose an enhanced scheme that removes these security problems of Das's authentication scheme, even if the secret information stored in the smart card is revealed to an attacker. The security analysis shows that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server.
Lee, Tian-Fu
2013-12-01
A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation in both schemes is their violation of the contributory property of key agreements. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme has fewer weaknesses and provides better security and efficiency.
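The chaotic-map primitive underlying such key agreement schemes is Chebyshev key agreement, which rests on the semigroup property T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)). A minimal numerical sketch follows; the seed and private integers are toy values, and practical schemes use enhanced Chebyshev maps over finite fields rather than floating point:

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

# Diffie-Hellman-style exchange using the semigroup property:
# T_a(T_b(x)) = T_{ab}(x) = T_b(T_a(x))
x = 0.53                  # public seed
a, b = 11, 17             # private integers of the two parties
pub_a = chebyshev(a, x)   # value A sends
pub_b = chebyshev(b, x)   # value B sends
key_a = chebyshev(a, pub_b)
key_b = chebyshev(b, pub_a)
print(key_a, key_b)
```

Both parties arrive at the same shared key T_ab(x) without transmitting their private integers; the contributory property the abstract discusses requires that both private values genuinely influence this final key.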
Simulating Self-Assembly with Simple Models
NASA Astrophysics Data System (ADS)
Rapaport, D. C.
Results from recent molecular dynamics simulations of virus capsid self-assembly are described. The model is based on rigid trapezoidal particles designed to form polyhedral shells of size 60, together with an atomistic solvent. The underlying bonding process is fully reversible. More extensive computations are required than in previous work on icosahedral shells built from triangular particles, but the outcome is a high yield of closed shells. Intermediate clusters have a variety of forms, and bond counts provide a useful classification scheme.
Coding Instead of Splitting - Algebraic Combinations in Time and Space
2016-06-09
sources message. For certain classes of two-unicast-Z networks, we show that the rate-tuple (N, 1) is achievable as long as the individual source-destination cuts for the two source-destination pairs are respectively at least as large as N and 1, and the generalized network sharing cut, a bound previously defined by Kamath et al., is at least as large as N + 1. We show this through a novel achievable scheme which is based on random linear coding at
Three-dimensional marginal separation
NASA Technical Reports Server (NTRS)
Duck, Peter W.
1988-01-01
The three-dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two-dimensional results (which are also computed to test the accuracy of the numerical scheme); quantitatively, however, the three-dimensional results differ markedly.
Optimal wavelets for biomedical signal compression.
Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario
2006-07-01
Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
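The distortion-rate criterion above can be illustrated with a minimal sketch: compress a signal by keeping only the largest wavelet coefficients at a fixed compression rate, then measure the percent residual distortion of the reconstruction. A single orthonormal Haar wavelet stands in for the paper's parameterized wavelet family, and plain coefficient thresholding stands in for embedded zerotree coding; all function names here are hypothetical.

```python
import math

def haar_forward(x):
    """Full orthonormal Haar DWT; len(x) must be a power of two."""
    x = list(x)
    n = len(x)
    out = []
    while n > 1:
        avg = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        det = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        out = det + out              # prepend: coarser details end up first
        x, n = avg, n // 2
    return x + out                   # [approximation] + detail coefficients

def haar_inverse(c):
    """Inverse of haar_forward."""
    approx, details = list(c[:1]), list(c[1:])
    while details:
        n = len(approx)
        det, details = details[:n], details[n:]
        nxt = []
        for a, d in zip(approx, det):
            nxt += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        approx = nxt
    return approx

def distortion_at_rate(signal, keep_fraction):
    """Percent residual distortion (PRD-style) after keeping only the
    largest-magnitude fraction of wavelet coefficients."""
    c = haar_forward(signal)
    k = max(1, int(round(keep_fraction * len(c))))
    thresh = sorted((abs(v) for v in c), reverse=True)[k - 1]
    kept = [v if abs(v) >= thresh else 0.0 for v in c]
    rec = haar_inverse(kept)
    num = sum((a - b) ** 2 for a, b in zip(signal, rec))
    den = sum(a ** 2 for a in signal)
    return 100.0 * math.sqrt(num / den)

# Toy signal: a slow oscillation plus a small higher-frequency component.
sig = [math.sin(2 * math.pi * i / 64) + 0.1 * math.sin(2 * math.pi * 7 * i / 64)
       for i in range(64)]
d50 = distortion_at_rate(sig, 0.5)   # distortion at 50% kept coefficients
```

Because the transform is orthonormal, the squared reconstruction error equals the energy of the dropped coefficients, so distortion can only grow as the compression becomes more aggressive.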
NASA Astrophysics Data System (ADS)
de Smet, Jeroen H.; van den Berg, Arie P.; Vlaar, Nico J.; Yuen, David A.
2000-03-01
Purely advective transport of composition is of major importance in the Geosciences, and efficient and accurate solution methods are needed. A characteristics-based method is used to solve the transport equation. We employ a new hybrid interpolation scheme, which allows for the tuning of stability and accuracy through a threshold parameter ε_th. Stability is established by bilinear interpolations, and bicubic splines are used to maintain accuracy. With this scheme, numerical instabilities can be suppressed by allowing numerical diffusion to work in time and locally in space. The scheme can be applied efficiently for preliminary modelling purposes. This can be followed by detailed high-resolution experiments. First, the principal effects of this hybrid interpolation method are illustrated and some tests are presented for numerical solutions of the transport equation. Second, we illustrate that this approach works successfully for a previously developed continental evolution model for the convecting upper mantle. In this model the transport equation contains a source term, which describes the melt production in pressure-released partial melting. In this model, a characteristic phenomenon of small-scale melting diapirs is observed (De Smet et al. 1998; De Smet et al. 1999). High-resolution experiments with grid cells down to 700 m horizontally and 515 m vertically result in highly detailed observations of the diapiric melting phenomenon.
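A one-dimensional analogue makes the hybrid idea concrete: use a cubic interpolant for accuracy, but fall back to the unconditionally bounded linear interpolant wherever the cubic overshoots its bracketing samples by more than a threshold (playing the role of ε_th above). This is a hedged sketch, not the paper's 2-D bilinear/bicubic-spline implementation; the Catmull-Rom cubic and the function name are illustrative choices made here.

```python
def hybrid_interp(y, t, eps=0.0):
    """Interpolate unit-spaced samples y at fractional position t.
    Catmull-Rom cubic for accuracy; fall back to linear interpolation when
    the cubic over- or undershoots the two bracketing samples by more than
    eps (the 1-D stand-in for the threshold parameter)."""
    i = int(t)
    i = max(1, min(len(y) - 3, i))        # keep a full 4-point stencil
    s = t - i
    p0, p1, p2, p3 = y[i-1], y[i], y[i+1], y[i+2]
    cubic = (p1 + 0.5 * s * (p2 - p0
             + s * (2*p0 - 5*p1 + 4*p2 - p3
             + s * (3*(p1 - p2) + p3 - p0))))
    lo, hi = min(p1, p2), max(p1, p2)
    if cubic < lo - eps or cubic > hi + eps:
        return p1 + s * (p2 - p1)         # linear: bounded, hence stable
    return cubic
```

Near a sharp feature the cubic can undershoot (mimicking the numerical oscillations the bilinear branch suppresses), so the hybrid returns the bounded linear value there while keeping cubic accuracy in smooth regions.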
Winge, Per; El Assimi, Aimen; Jouhet, Juliette; Vadstein, Olav
2017-01-01
Molecular mechanisms of phosphorus (P) limitation are of great interest for understanding algal production in aquatic ecosystems. Previous studies point to P limitation-induced changes in lipid composition. As, in microalgae, the molecular mechanisms of this specific P stress adaptation remain unresolved, we reveal a detailed phospholipid-recycling scheme in Nannochloropsis oceanica and describe important P acquisition genes based on highly corresponding transcriptome and lipidome data. Initial responses to P limitation showed increased expression of genes involved in P uptake and an expansion of the P substrate spectrum based on purple acid phosphatases. Increase in P trafficking displayed a rearrangement between compartments by supplying P to the chloroplast and carbon to the cytosol for lipid synthesis. We propose a novel phospholipid-recycling scheme for algae that leads to the rapid reduction of phospholipids and synthesis of the P-free lipid classes. P mobilization through membrane lipid degradation is mediated mainly by two glycerophosphoryldiester phosphodiesterases and three patatin-like phospholipases A on the transcriptome level. To compensate for low phospholipids in exponential growth, N. oceanica synthesized sulfoquinovosyldiacylglycerol and diacylglyceroltrimethylhomoserine. In this study, it was shown that an N. oceanica strain has a unique repertoire of genes that facilitate P acquisition and the degradation of phospholipids compared with other stramenopiles. The novel phospholipid-recycling scheme opens new avenues for metabolic engineering of lipid composition in algae. PMID:29051196
A country-wide probability sample of public attitudes toward stuttering in Portugal.
Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T
2017-06-01
Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. Published research is unavailable on public attitudes toward stuttering in Portugal as well as a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.
Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.
He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk
2014-10-01
The radio frequency identification (RFID) technology has been widely adopted and deployed as a dominant identification technology in the health care domain, for applications such as medical information authentication, patient tracking, and blood transfusion medicine. As security and privacy requirements on RFID-based authentication schemes become more stringent, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet them. However, many recently published ECC-based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC-based RFID authentication scheme integrated with an ID verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communication cost, and storage requirement.
Searchable attribute-based encryption scheme with attribute revocation in cloud storage.
Wang, Shangping; Zhao, Duqiao; Zhang, Yaling
2017-01-01
Attribute-based encryption (ABE) is an effective way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has an important application in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation for cloud storage. The keyword search in our scheme is attribute-based with access control: when a search succeeds, the cloud server returns the corresponding ciphertext to the user, who can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.
An Indirect Data Assimilation Scheme for Deep Soil Temperature in the Pleim-Xiu Land Surface Model
The Pleim-Xiu land surface model (PX LSM) has been improved by the addition of a second indirect data assimilation scheme. The first, which was described previously, is a technique in which soil moisture is nudged according to the biases in 2-m air temperature and relative humidity be...
Simple scheme to implement decoy-state reference-frame-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Zhu, Jianrong; Wang, Qin
2018-06-01
We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are assigned no basis. Different from the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X and Y bases, in our scheme decoy states are only prepared in the X and Y bases, which avoids the redundancy of decoy states in the Z basis, saves random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite number of pulses available in a short time. Numerical simulations show that, considering the finite-size effect with a reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme performs at least comparably to, and sometimes better than, the original decoy-state RFI-QKD scheme. In particular, in terms of resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, so it has great potential to be adopted in current QKD systems.
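The claimed saving in random-number consumption can be quantified with a small back-of-the-envelope sketch: with uniform basis choices, a decoy pulse in the original scheme needs log2(3) ≈ 1.585 random bits to pick among Z, X and Y, while a decoy pulse here needs only 1 bit to pick between X and Y (intensity and bit-value randomness are identical in both schemes). The uniform-choice assumption and the accounting below are illustrative, not taken from the paper.

```python
import math

def basis_choice_entropy(num_bases):
    """Shannon entropy (bits) of a uniform choice among num_bases bases --
    a lower bound on the random bits needed per pulse for basis selection."""
    return math.log2(num_bases)

# Illustrative accounting: only the basis-choice entropy differs between
# the two schemes.
original = {"signal": 3, "decoy": 3}   # original scheme: decoys in Z, X, Y
proposed = {"signal": 3, "decoy": 2}   # this scheme: decoys in X, Y only

# Random bits saved per decoy pulse by dropping the Z-basis decoys:
saving_per_decoy = (basis_choice_entropy(original["decoy"])
                    - basis_choice_entropy(proposed["decoy"]))
```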
ERIC Educational Resources Information Center
Kis, Viktoria
2016-01-01
Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…
Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram
2014-06-01
Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) to ensure the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to erase those drawbacks. We analyze Yan et al.'s scheme and identify that it is vulnerable to off-line password guessing attack and does not protect anonymity. Moreover, in their scheme, the login and password change phases cannot efficiently detect incorrect input, and this inefficiency in the password change phase can cause a denial-of-service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.
SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; Southern Medical University, Guangzhou; Tian, Z
2014-06-01
Purpose: Current Intensity Modulated Radiation Therapy (IMRT) is plan-centered. At each treatment fraction, we position the patient to match the setup in the treatment plan. Inaccurate setup can compromise the delivered dose distribution and hence lead to suboptimal treatments. Moreover, the current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate the impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned setup. Our Supercomputing Online Replanning Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We have conducted simulation studies in 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, rotations of 1, 3, 5, and 7 degrees were applied in the pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion and extension. Setup variation ranges are based on observed numbers in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with the original plan dose with the original geometry and the original plan dose recalculated with the new setup geometries. Results: With the conventional plan-centered approach, setup error could lead to significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patient-centered approach is effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s.
Conclusion: Patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impacts and feasibility has been shown through simulation studies on HN cases.
Active identification and control of aerodynamic instabilities in axial and centrifugal compressors
NASA Astrophysics Data System (ADS)
Krichene, Assad
In this thesis, it is experimentally shown that dynamic cursors to stall and surge exist in both axial and centrifugal compressors using the experimental axial and centrifugal compressor rigs located in the School of Aerospace Engineering at the Georgia Institute of Technology. Further, it is shown that the dynamic cursors to stall and surge can be identified in real-time and they can be used in a simple control scheme to avoid the occurrence of stall and surge instabilities altogether. For the centrifugal compressor, a previously developed real-time observer is used in order to detect dynamic cursors to surge in real-time. An off-line analysis using the Fast Fourier Transform (FFT) of the open loop experimental data from the centrifugal compressor rig is carried out to establish the influence of compressor speed on the dynamic cursor frequency. The variation of the amplitude of dynamic cursors with compressor operating condition from experimental data is qualitatively compared with simulation results obtained using a generic compression system model subjected to white noise excitation. Using off-line analysis results, a simple control scheme based on fuzzy logic is synthesized for surge avoidance and recovery. The control scheme is implemented in the centrifugal compressor rig using compressor bleed as well as fuel flow to the combustor. Closed loop experimental results are obtained to demonstrate the effectiveness of the controller for both surge avoidance and surge recovery. The existence of stall cursors in an axial compression system is established using the observer scheme from off-line analysis of an existing database of a commercial gas turbine engine. However, the observer scheme is found to be ineffective in detecting stall cursors in the experimental axial compressor rig in the School of Aerospace Engineering at the Georgia Institute of Technology. 
An alternate scheme based on the amplitude of pressure data content at the blade passage frequency obtained using a pressure sensor located (in the casing) over the blade row is developed and used in the axial compressor rig for stall and surge avoidance and recovery. (Abstract shortened by UMI.)
Wen, Fengtong
2013-12-01
User authentication plays an important role in protecting resources or services from being accessed by unauthorized users. In a recent paper, Das et al. proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. This scheme uses three factors, i.e. biometrics, password, and smart card, to protect security. It protects user privacy and is believed to resist a range of network attacks, even if the secret information stored in the smart card is compromised. In this paper, we analyze the security of Das et al.'s scheme and show that it is in fact insecure against replay, user impersonation and off-line guessing attacks. We then propose a robust uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. Compared with the existing schemes, our protocol uses a different user authentication mechanism to resist the replay attack. We show that our proposed scheme provides stronger security than previous protocols. Furthermore, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
Moon, Jongho; Choi, Younsung; Kim, Jiye; Won, Dongho
2016-03-01
Recently, numerous extended chaotic map-based password authentication schemes that employ smart card technology were proposed for Telecare Medical Information Systems (TMISs). In 2015, Lu et al. used Li et al.'s scheme as a basis to propose a password authentication scheme for TMISs that is based on biometrics and smart card technology and employs extended chaotic maps. Lu et al. demonstrated that Li et al.'s scheme has several weaknesses, including a violation of session-key security, a vulnerability to the user impersonation attack, and a lack of local verification. In this paper, however, we show that Lu et al.'s scheme is still insecure: it also violates session-key security and is vulnerable to both the outsider attack and the impersonation attack. To overcome these drawbacks, we retain the useful properties of Lu et al.'s scheme and propose a new password authentication scheme that is based on smart card technology and chaotic maps. We then show that our proposed scheme is more secure and efficient and satisfies the desired security properties.
Lee, Sun Mi; Katz, Matthew H G; Liu, Li; Sundar, Manonmani; Wang, Hua; Varadhachary, Gauri R; Wolff, Robert A; Lee, Jeffrey E; Maitra, Anirban; Fleming, Jason B; Rashid, Asif; Wang, Huamin
2016-12-01
Neoadjuvant therapy has been increasingly used to treat patients with potentially resectable pancreatic ductal adenocarcinoma (PDAC). Although the College of American Pathologists (CAP) grading scheme for tumor response in posttherapy specimens has been used, its clinical significance has not been validated. Previously, we proposed a 3-tier histologic tumor regression grading (HTRG) scheme (HTRG 0, no viable tumor; HTRG 1, <5% viable tumor cells; HTRG 2, ≥5% viable tumor cells) and showed that the 3-tier HTRG scheme correlated with prognosis. In this study, we sought to validate our proposed HTRG scheme in a new cohort of 167 consecutive PDAC patients who completed neoadjuvant therapy and pancreaticoduodenectomy. We found that patients with HTRG 0 or 1 were associated with a lower frequency of lymph node metastasis (P=0.004) and recurrence (P=0.01), lower ypT (P<0.001) and AJCC stage (P<0.001), longer disease-free survival (DFS, P=0.004) and overall survival (OS, P=0.02) than those with HTRG 2. However, there was no difference in either DFS or OS between the groups with CAP grade 2 and those with CAP grade 3 (P>0.05). In multivariate analysis, HTRG grade 0 or 1 was an independent prognostic factor for better DFS (P=0.03), but not OS. Therefore we validated the proposed HTRG scheme from our previous study. The proposed HTRG scheme is simple and easy to apply in practice by pathologists and might be used as a successful surrogate for longer DFS in patients with potentially resectable PDAC who completed neoadjuvant therapy and surgery.
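The 3-tier cutoffs above translate directly into a grading rule; the short sketch below encodes them (function names are ours, not from the paper):

```python
def htrg_grade(viable_tumor_pct):
    """Map the percentage of viable tumor cells in the post-therapy
    specimen to the 3-tier HTRG scheme described above."""
    if viable_tumor_pct == 0:
        return 0          # HTRG 0: no viable tumor
    if viable_tumor_pct < 5:
        return 1          # HTRG 1: <5% viable tumor cells
    return 2              # HTRG 2: >=5% viable tumor cells

def favorable_response(viable_tumor_pct):
    """HTRG 0 or 1 was associated with longer disease-free survival."""
    return htrg_grade(viable_tumor_pct) <= 1
```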
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.; Galbally, Ian E.; Woodhouse, Matthew T.; Thatcher, Marcus
2017-03-01
Schemes used to parameterise ozone dry deposition velocity at the oceanic surface mainly differ in terms of how the dominant term of surface resistance is parameterised. We examine three such schemes and test them in a global climate-chemistry model that incorporates meteorological nudging and monthly-varying reactive-gas emissions. The default scheme invokes the commonly used assumption that the water surface resistance is constant. The other two schemes, named the one-layer and two-layer reactivity schemes, include the simultaneous influence on the water surface resistance of ozone solubility in water, waterside molecular diffusion and turbulent transfer, and a first-order chemical reaction of ozone with dissolved iodide. Unlike the one-layer scheme, the two-layer scheme can indirectly control the degree of interaction between chemical reaction and turbulent transfer through the specification of a surface reactive layer thickness. A comparison is made of the modelled deposition velocity dependencies on sea surface temperature (SST) and wind speed with recently reported cruise-based observations. The default scheme overestimates the observed deposition velocities by a factor of 2-4 when the chemical reaction is slow (e.g. under colder SSTs in the Southern Ocean). The default scheme has almost no temperature, wind speed, or latitudinal variations in contrast with the observations. The one-layer scheme provides noticeably better variations, but it overestimates deposition velocity by a factor of 2-3 due to an enhancement of the interaction between chemical reaction and turbulent transfer. The two-layer scheme with a surface reactive layer thickness specification of 2.5 µm, which is approximately equal to the reaction-diffusive length scale of the ozone-iodide reaction, is able to simulate the field measurements most closely with respect to absolute values as well as SST and wind-speed dependence. 
The annual global oceanic deposition of ozone determined using this scheme is approximately half of the original oceanic deposition obtained using the default scheme, and it corresponds to a 10 % decrease in the original estimate of the total global ozone deposition. The previously reported modelled estimate of oceanic deposition is roughly one-third of total deposition and with this new parameterisation it is reduced to 12 % of the modelled total global ozone deposition. Deposition parameterisation influences the predicted atmospheric ozone mixing ratios, especially in the Southern Hemisphere. For the latitudes 45-70° S, the two-layer scheme improves the prediction of ozone observed at an altitude of 1 km by 7 % and that within the altitude range 1-6 km by 5 % compared to the default scheme.
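The contrast between a constant water-surface resistance and a reactivity-based one can be sketched with the classic series-resistance form: for a gas undergoing a first-order reaction (rate a) while diffusing (diffusivity D) into water of dimensionless solubility α, the waterside reactive transfer velocity is √(aD), giving a surface resistance r_c = 1/(α√(aD)) that falls as the chemistry speeds up. This is a schematic one-layer-style illustration with invented parameter values, not the paper's parameterisation or its two-layer refinement.

```python
import math

def deposition_velocity(r_a, r_b, alpha, a, D):
    """Series-resistance sketch: v_d = 1/(r_a + r_b + r_c), with the
    reactivity-controlled water-surface resistance r_c = 1/(alpha*sqrt(a*D)).
    r_a, r_b : aerodynamic and quasi-laminar resistances (s m^-1)
    alpha    : dimensionless solubility
    a        : first-order aqueous reaction rate (s^-1), e.g. ozone + iodide
    D        : aqueous molecular diffusivity (m^2 s^-1)
    All values below are illustrative, not taken from the paper."""
    r_c = 1.0 / (alpha * math.sqrt(a * D))
    return 1.0 / (r_a + r_b + r_c)

# Faster chemistry (e.g. warmer water with more iodide) lowers r_c and
# raises the deposition velocity, qualitatively matching the SST dependence
# discussed above:
v_cold = deposition_velocity(r_a=50.0, r_b=30.0, alpha=0.3, a=100.0,  D=1.5e-9)
v_warm = deposition_velocity(r_a=50.0, r_b=30.0, alpha=0.3, a=1000.0, D=1.5e-9)
```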
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
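One consistent reading of the 50%-83.3% range is the simple compression-ratio arithmetic below: storing each reference frame compressed at ratio r cuts FB traffic by 1 - 1/r, so 2:1 and 6:1 give exactly 50% and 83.3%. This mapping is our illustrative assumption; in the paper the achievable range also depends on the Level-C versus Level-D reuse scheme.

```python
def bandwidth_reduction(compression_ratio):
    """Fraction of frame-buffer bandwidth saved when each reference frame
    is stored compressed at the given ratio (e.g. 6.0 means 6:1)."""
    return 1.0 - 1.0 / compression_ratio

# The quoted 50%-83.3% range corresponds to 2:1 and 6:1 compression:
low  = bandwidth_reduction(2.0)   # 0.5
high = bandwidth_reduction(6.0)   # 5/6 = 0.8333...
```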
Mishra, Raghavendra; Barnwal, Amit Kumar
2015-05-01
The Telecare medical information system (TMIS) delivers effective healthcare services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. It was later proved insecure against various active and passive attacks. To remove the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not offer pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we propose an improved anonymous smart card based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, it presents efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with relevant schemes.
Quantum blind dual-signature scheme without arbitrator
NASA Astrophysics Data System (ADS)
Li, Wei; Shi, Ronghua; Huang, Dazu; Shi, Jinjing; Guo, Ying
2016-03-01
Motivated by the elegant features of a blind signature, we suggest the design of a quantum blind dual-signature scheme with three phases, i.e., an initial phase, a signing phase and a verification phase. Different from conventional schemes, legal messages are signed not only by the blind signatory but also by the sender in the signing phase. The scheme does not rely much on an arbitrator in the verification phase, as previous quantum signature schemes usually do. The security is guaranteed by entanglement in quantum information processing. Security analysis demonstrates that the signature can be neither forged nor disavowed by illegal participants or attackers. It provides a potential application for e-commerce or e-payment systems with current technology.
NASA Technical Reports Server (NTRS)
Beggs, John H.
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been extended to treat lossy dielectric and magnetic materials. This paper examines different methodologies for treatment of the electric loss term in the Linear Bicharacteristic Scheme for computational electromagnetics. Several different treatments of the electric loss term using the LBS are explored and compared on one-dimensional model problems involving reflection from lossy dielectric materials on both uniform and nonuniform grids. Results using these LBS implementations are also compared with the FDTD method for convenience.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Local matrix learning in clustering and applications for manifold visualization.
Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara
2010-05-01
Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e. dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters of arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization. 2009 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program that is essentially based on modal synthesis combined with a central finite difference numerical integration scheme. The methodology leads to a modular, or building-block, technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-DeWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
NASA Astrophysics Data System (ADS)
D'yachkov, A. B.; Gorkunov, A. A.; Labozin, A. V.; Mironov, S. M.; Panchenko, V. Ya.; Firsov, V. A.; Tsvetkov, G. O.
2018-01-01
The use of atomic vapour laser isotope separation (AVLIS) for solving a number of urgent problems (formation of 177Lu radionuclides for medical applications, 63Ni radionuclides for betavoltaic power supplies and the 150Nd isotope for searching for neutrinoless double β decay and neutrino mass) is considered. An efficient three-step scheme of photoionisation of neodymium atoms through the 50474 cm⁻¹ autoionising state with radiation wavelengths of the corresponding stages of λ1 = 6289.7 Å, λ2 = 5609.4 Å and λ3 = 5972.1 Å is developed. The average saturation intensity of the autoionising transition is ~6 W cm⁻², a value consistent with the characteristics of the previously developed photoionisation schemes for lutetium and nickel. A compact laser system for the technological AVLIS complex, designed to produce radionuclides and isotopes under laboratory conditions, is developed based on the experimental results.
Hu, Yang; Li, Decai; Shu, Shi; Niu, Xiaodong
2016-02-01
Based on the Darcy-Brinkman-Forchheimer equation, a finite-volume computational model with a lattice Boltzmann flux scheme is proposed for incompressible porous media flow in this paper. The fluxes across the cell interface are calculated by reconstructing the local solution of the generalized lattice Boltzmann equation for porous media flow. The time-scaled midpoint integration rule is adopted to discretize the governing equation, so that the time step is limited by the Courant-Friedrichs-Lewy condition. The force term which evaluates the effect of the porous medium is added to the discretized governing equation directly. The numerical simulations of the steady Poiseuille flow, the unsteady Womersley flow, the circular Couette flow, and the lid-driven flow are carried out to verify the present computational model. The obtained results show good agreement with the analytical, finite-difference, and/or previously published solutions.
Lin, Psang Dain
2014-05-10
In a previous paper [Appl. Opt. 52, 4151 (2013)], we presented the first- and second-order derivatives of a ray for a flat boundary surface to design prisms. In this paper, that scheme is extended to determine the Jacobian and Hessian matrices of a skew ray as it is reflected/refracted at a spherical boundary surface. The validity of the proposed approach as an analysis and design tool is demonstrated using an axis-symmetrical system for illustration purposes. It is found that these two matrices can provide the search direction used by existing gradient-based schemes to minimize the merit function during the optimization stage of the optical system design process. It is also possible to make optical system design more automatic if the image defects can be extracted from the Jacobian and Hessian matrices of a skew ray.
Security proof of continuous-variable quantum key distribution using three coherent states
NASA Astrophysics Data System (ADS)
Brádler, Kamil; Weedbrook, Christian
2018-02-01
We introduce a ternary quantum key distribution (QKD) protocol and asymptotic security proof based on three coherent states and homodyne detection. Previous work had considered the binary case of two coherent states and here we nontrivially extend this to three. Our motivation is to leverage the practical benefits of both discrete and continuous (Gaussian) encoding schemes creating a best-of-both-worlds approach; namely, the postprocessing of discrete encodings and the hardware benefits of continuous ones. We present a thorough and detailed security proof in the limit of infinite signal states which allows us to lower bound the secret key rate. We calculate this in the context of collective eavesdropping attacks and reverse reconciliation postprocessing. Finally, we compare the ternary coherent state protocol to other well-known QKD schemes (and fundamental repeaterless limits) in terms of secret key rates and loss.
Unconditional security of a three state quantum key distribution protocol.
Boileau, J-C; Tamaki, K; Batuwantudawe, J; Laflamme, R; Renes, J M
2005-02-04
Quantum key distribution (QKD) protocols are cryptographic techniques with security based only on the laws of quantum mechanics. Two prominent QKD schemes are the Bennett-Brassard 1984 and Bennett 1992 protocols that use four and two quantum states, respectively. In 2000, Phoenix et al. proposed a new family of three-state protocols that offers advantages over the previous schemes. Until now, an error rate threshold for security of the symmetric trine spherical code QKD protocol has been shown only for the trivial intercept-resend eavesdropping strategy. In this Letter, we prove the unconditional security of the trine spherical code QKD protocol, demonstrating its security up to a bit error rate of 9.81%. We also discuss how this proof applies to a version of the trine spherical code QKD protocol where the error rate is evaluated from the number of inconclusive events.
NASA Astrophysics Data System (ADS)
Al-Gburi, A.; Freeman, C. T.; French, M. C.
2018-06-01
This paper uses gap metric analysis to derive robustness and performance margins for feedback linearising controllers. Distinct from previous robustness analysis, it incorporates the case of output unstructured uncertainties, and is shown to yield general stability conditions which can be applied to both stable and unstable plants. It then expands on existing feedback linearising control schemes by introducing a more general robust feedback linearising control design which classifies the system nonlinearity into stable and unstable components and cancels only the unstable plant nonlinearities. This is done in order to preserve the stabilising action of the inherently stabilising nonlinearities. Robustness and performance margins are derived for this control scheme, and are expressed in terms of bounds on the plant nonlinearities and the accuracy of the cancellation of the unstable plant nonlinearity by the controller. Case studies then confirm reduced conservatism compared with standard methods.
An Inverter Packaging Scheme for an Integrated Segmented Traction Drive System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Gui-Jia; Tang, Lixin; Ayers, Curtis William
The standard voltage source inverter (VSI), widely used in electric vehicle/hybrid electric vehicle (EV/HEV) traction drives, requires a bulky dc bus capacitor to absorb the large switching ripple currents and prevent them from shortening the battery's life. The dc bus capacitor presents a significant barrier to meeting inverter cost, volume, and weight requirements for mass production of affordable EVs/HEVs. The large ripple currents become even more problematic for the film capacitors (the capacitor technology of choice for EVs/HEVs) in high-temperature environments, as their ripple current handling capability decreases rapidly with rising temperatures. It was shown in previous work that segmenting the VSI-based traction drive system can significantly decrease the ripple currents and thus the size of the dc bus capacitor. This paper presents an integrated packaging scheme to reduce the system cost of a segmented traction drive.
Exploration of tetrahedral structures in silicate cathodes using a motif-network scheme
Zhao, Xin; Wu, Shunqing; Lv, Xiaobao; ...
2015-10-26
Using a motif-network search scheme, we studied the tetrahedral structures of the dilithium/disodium transition metal orthosilicates A2MSiO4 with A = Li or Na and M = Mn, Fe or Co. In addition to finding all previously reported structures, we discovered many other tetrahedral-network-based crystal structures which are highly degenerate in energy. These structures can be classified into structures with 1D, 2D and 3D M-Si-O frameworks. A clear trend of structural preference in the different systems was revealed, and possible indicators that affect the structure stabilities were introduced. For the Na systems, which have been much less investigated in the literature relative to the Li systems, we predicted their ground-state structures and found evidence for the existence of new structural motifs.
A gas kinetic scheme for hybrid simulation of partially rarefied flows
NASA Astrophysics Data System (ADS)
Colonia, S.; Steijl, R.; Barakos, G.
2017-06-01
Approaches to predict flow fields that display rarefaction effects incur a cost in computational time and memory considerably higher than methods commonly employed for continuum flows. For this reason, to simulate flow fields where continuum and rarefied regimes coexist, hybrid techniques have been introduced. In the present work, analytically defined gas-kinetic schemes based on the Shakhov and Rykov models for monoatomic and diatomic gas flows, respectively, are proposed and evaluated with the aim to be used in the context of hybrid simulations. This should reduce the region where more expensive methods are needed by extending the validity of the continuum formulation. Moreover, since for high-speed rarefied gas flows it is necessary to take into account the nonequilibrium among the internal degrees of freedom, the extension of the approach to employ diatomic gas models including rotational relaxation process is a mandatory first step towards realistic simulations. Compared to previous works of Xu and coworkers, the presented scheme is defined directly on the basis of kinetic models which involve a Prandtl number correction. Moreover, the methods are defined fully analytically instead of making use of Taylor expansion for the evaluation of the required derivatives. The scheme has been tested for various test cases and Mach numbers proving to produce reliable predictions in agreement with other approaches for near-continuum flows. Finally, the performance of the scheme, in terms of memory and computational time, compared to discrete velocity methods makes it a compelling alternative in place of more complex methods for hybrid simulations of weakly rarefied flows.
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by utilizing the adaptation of the arc-length continuation algorithm to classical density functional theory (DFT) used originally by Frink and Salinger, and its advantages are summarized into four points: (i) the new scheme is applicable whether the wetting occurs near a planar or a non-planar surface, whereas a zero contact angle method is considered only applicable to a perfectly flat solid surface, as demonstrated previously and in this work, and essentially not fit for non-planar surfaces. (ii) The new scheme is devoid of an uncertainty, which plagues a pre-wetting extrapolation method and originates from an unattainability of the infinitely thick film in the theoretical calculation. (iii) The new scheme can be similarly and easily applied to extreme instances characterized by lower temperatures and/or higher surface attraction force fields, which, however, cannot be dealt with by the pre-wetting extrapolation method because of the pre-wetting transition being mixed with many layering transitions and the difficulty in differentiating varieties of the surface phase transitions. (iv) The new scheme still works in instances where the wetting transition occurs close to the bulk critical temperature; this case cannot be managed at all by the pre-wetting extrapolation method because near the bulk critical temperature the pre-wetting region is extremely narrow, and not enough pre-wetting data are available for use of the extrapolation procedure.
Intelligent design of permanent magnet synchronous motor based on CBR
NASA Astrophysics Data System (ADS)
Li, Cong; Fan, Beibei
2018-05-01
Aiming at the many problems in the design process of permanent magnet synchronous motors (PMSMs), such as the complexity of the design process, the over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge, a CBR-based design method for PMSMs is proposed to solve those problems. In this paper, a case-based reasoning (CBR) method of case-similarity calculation is proposed for inferring a suitable initial scheme. This method helps designers, by referencing previous design cases, to produce a conceptual PMSM solution quickly. The case-retention process gives the system a self-enriching capability which improves its design ability with continued use.
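The case-retrieval step of such a CBR system is typically a weighted nearest-neighbour similarity search over design attributes. The sketch below illustrates this under assumed attribute names, value ranges, and weights (the abstract does not specify the actual PMSM attribute set):

```python
# Hypothetical PMSM design attributes with assumed value ranges;
# the paper does not list the actual attribute set or weights.
RANGES = {"rated_power_kW": 200.0, "rated_speed_rpm": 6000.0, "torque_Nm": 500.0}

def attribute_sim(q, c, rng):
    # Local similarity: decreases linearly with the normalized distance.
    return max(0.0, 1.0 - abs(q - c) / rng)

def case_similarity(query, case, weights):
    # Global similarity: weighted average of the local similarities.
    total = sum(weights.values())
    return sum(w * attribute_sim(query[k], case[k], RANGES[k])
               for k, w in weights.items()) / total

def retrieve(query, case_base, weights):
    # Return the stored design case most similar to the query spec.
    return max(case_base, key=lambda c: case_similarity(query, c["spec"], weights))

cases = [
    {"name": "motor_A", "spec": {"rated_power_kW": 50, "rated_speed_rpm": 3000, "torque_Nm": 160}},
    {"name": "motor_B", "spec": {"rated_power_kW": 120, "rated_speed_rpm": 4500, "torque_Nm": 255}},
]
query = {"rated_power_kW": 110, "rated_speed_rpm": 4000, "torque_Nm": 260}
weights = {"rated_power_kW": 0.5, "rated_speed_rpm": 0.2, "torque_Nm": 0.3}
print(retrieve(query, cases, weights)["name"])  # → motor_B
```

The retrieved case then serves as the initial scheme that the designer adapts; retaining the adapted result back into `cases` is what gives the system its self-enriching behaviour.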
A Robust Blind Quantum Copyright Protection Method for Colored Images Based on Owner's Signature
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Gheibi, Reza; Houshmand, Monireh; Nagata, Koji
2017-08-01
Watermarking is the imperceptible embedding of watermark bits into multimedia data for use in different applications. Among all its applications, copyright protection is the most prominent usage, which conceals information about the owner in the carrier so as to prohibit others from asserting copyright. This application requires a high level of robustness. In this paper, a new blind quantum copyright protection method based on the owner's signature in RGB images is proposed. The method utilizes one of the RGB channels as an indicator, and the two remaining channels are used for embedding information about the owner. In our contribution the owner's signature is considered as text. Therefore, in order to embed it in a colored image as a watermark, a new quantum representation of text based on the ASCII character set is offered. Experimental results, which are analyzed in the MATLAB environment, exhibit that the presented scheme shows good performance against attacks and can be used to find out who the real owner is. Finally, the discussed quantum copyright protection method is compared with a related work; our analysis confirms that the presented scheme is more secure and applicable than the previous ones currently found in the literature.
Rossi, Michael R.; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2009-01-01
The current study focuses on experimentally validating a planning scheme based on the so-called bubble-packing method. This study is part of an ongoing effort to develop computerized planning tools for cryosurgery, where bubble packing has previously been developed as a means to find an initial, uniform distribution of cryoprobes within a given domain; the so-called force-field analogy was then used to move cryoprobes to their optimum layout. However, due to the high quality of the cryoprobe distribution suggested by bubble packing and its low computational cost, it has been argued that a planning scheme based solely on bubble packing may be more clinically relevant. To test this argument, an experimental validation is performed on a simulated cross section of the prostate, using gelatin solution as a phantom material, proprietary liquid-nitrogen-based cryoprobes, and a cryoheater to simulate urethral warming. Experimental results are compared with numerically simulated temperature histories resulting from planning. Results indicate an average disagreement of 0.8 mm in identifying the freezing-front location, which is an acceptable level of uncertainty in the context of prostate cryosurgery imaging. PMID:19885373
NASA Astrophysics Data System (ADS)
Martelloni, Gianluca; Bagnoli, Franco; Guarino, Alessio
2017-09-01
We present a three-dimensional model of rain-induced landslides, based on cohesive spherical particles. The rainwater infiltration into the soil follows either the fractional or the fractal diffusion equations. We analytically solve the fractal partial differential equation (PDE) for diffusion with particular boundary conditions to simulate a rainfall event. We developed a numerical integration scheme for the PDE, compared with the analytical solution. We adapt the fractal diffusion equation to obtain the gravimetric water content that we use as input of a triggering scheme based on the Mohr-Coulomb limit-equilibrium criterion. This triggering is then complemented by a standard molecular dynamics algorithm, with an interaction force inspired by the Lennard-Jones potential, to update the positions and velocities of particles. We present our results for homogeneous and heterogeneous systems, i.e., systems composed of particles with the same or different radii, respectively. Interestingly, in the heterogeneous case, we observe segregation effects due to the different volumes of the particles. Finally, we analyze the parameter sensitivity both for the triggering and the propagation phases. Our simulations confirm the results of a previous two-dimensional model and therefore the feasible applicability to real cases.
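The infiltration step can be illustrated with an ordinary explicit finite-difference diffusion solver. This is a deliberately simplified stand-in: the classical Laplacian replaces the paper's fractal/fractional operators, and the rainfall event is reduced to a fixed surface water content:

```python
import numpy as np

def diffuse(theta, D, dx, dt, steps, surface_value):
    """Explicit FTCS integration of the 1-D diffusion equation
    d(theta)/dt = D * d2(theta)/dx2, with a fixed water content at the
    surface (x = 0, simulating rainfall) and a no-flux bottom boundary."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dx^2 <= 1/2"
    theta = theta.copy()
    for _ in range(steps):
        theta[0] = surface_value                              # Dirichlet at surface
        theta[1:-1] = theta[1:-1] + r * (theta[2:] - 2 * theta[1:-1] + theta[:-2])
        theta[-1] = theta[-2]                                 # no-flux at depth
    return theta

# Hypothetical soil parameters: 1 m column, 2 cm cells, D in m^2/s.
profile = diffuse(np.zeros(50), D=1e-6, dx=0.02, dt=100.0, steps=500,
                  surface_value=0.4)
```

After integration, `profile` decreases monotonically with depth; in the full model this water-content profile would feed the Mohr-Coulomb triggering criterion.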
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single hologram acquisition and using a fast convergence algorithm for image processing. Altogether, MISHELF (initials coming from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel domain diffraction patterns in a single camera snapshot, obtained by illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme, resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range) phase-retrieved (twin image elimination) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus becomes an alternative instrument that improves some capabilities of existing lensless microscopes.
Yang, Hui; Zhang, Jie; Zhao, Yongli; Ji, Yuefeng; Li, Hui; Lin, Yi; Li, Gang; Han, Jianrui; Lee, Young; Ma, Teng
2014-07-28
Data center interconnection with elastic optical networks is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. We previously implemented enhanced software defined networking over an elastic optical network for data center applications [Opt. Express 21, 26990 (2013)]. On that basis, this study extends to consider time-aware data center service scheduling with elastic service time and service bandwidth according to the various time sensitivity requirements. A novel time-aware enhanced software defined networking (TeSDN) architecture for elastic data center optical interconnection is proposed in this paper, by introducing a time-aware resources scheduling (TaRS) scheme. The TeSDN can accommodate data center services with the required QoS considering the time dimensionality, and enhance cross-stratum optimization of application-stratum and elastic-optical-network-stratum resources based on spectrum elasticity, application elasticity and time elasticity. The overall feasibility and efficiency of the proposed architecture is experimentally verified on our OpenFlow-based testbed. The performance of the TaRS scheme under a heavy traffic load scenario is also quantitatively evaluated based on the TeSDN architecture in terms of blocking probability and resource occupation rate.
Veerasamy, Anitha; Madane, Srinivasa Rao; Sivakumar, K; Sivaraman, Audithan
2016-01-01
The growing attractiveness of Mobile Ad Hoc Networks (MANETs), their features, and their usage have also attracted threats and attacks with negative consequences for society. The typical features of MANETs, especially the dynamic topology and open wireless medium, may leave MANETs vulnerable. Trust management using an uncertain reasoning scheme has previously been attempted to solve this problem. However, it produces additional overhead while securing the network. Hence, a Location and Trust-based secure communication scheme (L&TS) is proposed to overcome this limitation. Since securing the design requires more than two data algorithms, the cost of the system goes up. Another mechanism proposed in this paper, Angle and Context Free Grammar (ACFG) based precarious node elimination and secure communication in MANETs, intends to secure data transmission and detect precarious nodes in a MANET at a comparatively lower cost. The Elliptic Curve function is used to isolate a malicious node, thereby incorporating secure data transfer. Simulation results show that the dynamic estimation of the metrics improves throughput by 26% in L&TS when compared to TMUR. ACFG achieves 33% and 51% throughput increases when compared to the L&TS and TMUR mechanisms, respectively.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng
2015-11-01
To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
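An authentication flow built only from one-way hashes and XOR masking can be sketched as follows. This is a generic hash-and-XOR pattern, not the authors' exact protocol; the message flow and identifiers such as `card_B` are invented for illustration:

```python
import hashlib
import os

def h(*parts):
    # One-way hash over the concatenated byte strings.
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Registration (hypothetical): the server's long-term secret x never
# leaves the server; the smart card stores h(ID || x) masked by the password.
server_secret = os.urandom(32)
ID, PW = b"patient-01", b"pw-1234"
card_B = xor(h(ID, server_secret), h(ID, PW))   # value stored on the smart card

# Login: the card unmasks A = h(ID || x) with the password and proves
# knowledge of it against a fresh nonce, using only hash/XOR operations.
nonce = os.urandom(16)
A = xor(card_B, h(ID, PW))
proof = h(A, nonce)

# Server-side verification recomputes the same value from x.
assert proof == h(h(ID, server_secret), nonce)
print("authenticated")
```

The appeal of this style of scheme is exactly what the abstract claims: verification costs only a few hash evaluations, with no modular exponentiation or elliptic curve arithmetic.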
A Practical and Secure Coercion-Resistant Scheme for Internet Voting
NASA Astrophysics Data System (ADS)
Araújo, Roberto; Foulle, Sébastien; Traoré, Jacques
Juels, Catalano, and Jakobsson (JCJ) proposed at WPES 2005 the first voting scheme that considers real-world threats and that is more realistic for Internet elections. Their scheme, though, has a quadratic work factor and thereby is not efficient for large scale elections. Based on the work of JCJ, Smith proposed an efficient scheme that has a linear work factor. In this paper we first show that Smith's scheme is insecure. Then we present a new coercion-resistant election scheme with a linear work factor that overcomes the flaw of Smith's proposal. Our solution is based on the group signature scheme of Camenisch and Lysyanskaya (Crypto 2004).
End-to-End QoS for Differentiated Services and ATM Internetworking
NASA Technical Reports Server (NTRS)
Su, Hongjun; Atiquzzaman, Mohammed
2001-01-01
The Internet was initially designed for non-real-time data communications and hence does not provide any Quality of Service (QoS). The next-generation Internet will be characterized by high speed and QoS guarantees. The aim of this paper is to develop a prioritized early packet discard (PEPD) scheme for ATM switches to provide service differentiation and QoS guarantees to end applications running over the next-generation Internet. The proposed PEPD scheme differs from previous schemes by taking into account the priority of packets generated from different applications. We develop a Markov chain model for the proposed scheme and verify the model with simulation. Numerical results show that the results from the model and computer simulation are in close agreement. Our PEPD scheme provides service differentiation to the end-to-end applications.
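The early-packet-discard idea with per-priority thresholds can be sketched as a toy queue model. The thresholds, cell format, and class names below are invented; the paper analyzes the scheme with a Markov chain rather than simulation code:

```python
class PEPDQueue:
    """Toy prioritized early packet discard (PEPD) buffer.

    Once occupancy reaches the threshold for a packet's priority, the
    packet is rejected starting from its FIRST cell, so whole packets are
    discarded early rather than truncated mid-packet."""

    def __init__(self, capacity, thresholds):
        self.capacity = capacity
        self.thresholds = thresholds      # priority -> discard threshold
        self.q = []                       # buffered cells
        self.dropping = set()             # ids of packets being discarded

    def arrive(self, cell):
        pkt_id, prio, first, last = cell
        drop = False
        if pkt_id in self.dropping:
            drop = True                   # continue discarding this packet
        elif first and len(self.q) >= min(self.thresholds[prio], self.capacity):
            drop = True                   # early discard at the packet head
        elif len(self.q) >= self.capacity:
            drop = True                   # buffer full: forced tail drop
        if drop:
            if not last:
                self.dropping.add(pkt_id)
            else:
                self.dropping.discard(pkt_id)
            return False
        self.q.append(cell)
        return True

# Priority 0 (high) tolerates more occupancy than priority 1 (low).
q = PEPDQueue(capacity=10, thresholds={0: 8, 1: 4})
for i in range(6):
    q.arrive((f"bg{i}", 1, True, True))   # single-cell background packets
accepted_low = q.arrive(("p_low", 1, True, True))
accepted_high = q.arrive(("p_high", 0, True, True))
print(accepted_low, accepted_high)        # → False True
```

At an occupancy of four cells the low-priority packet is discarded while the high-priority packet is still admitted, which is the service differentiation the scheme is built to provide.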
Hyperentanglement purification using imperfect spatial entanglement.
Wang, Tie-Jun; Mi, Si-Chen; Wang, Chuan
2017-02-06
Interactions between photons and the environment leave entangled photon pairs in less-entangled or even mixed states, so the security and efficiency of quantum communication decrease. We present an efficient hyperentanglement purification protocol that distills nonlocal high-fidelity hyper-entangled Bell states in both polarization and spatial-mode degrees of freedom from ensembles of two-photon systems in mixed states using linear optics. Here, we consider the influence of photon loss in the channel, which is generally ignored in conventional entanglement purification and hyperentanglement purification (HEP) schemes. Compared with previous HEP schemes, our HEP scheme decreases the requirement for nonlocal resources by employing a high-dimensional mode-check measurement, and leads to a higher fidelity, especially in the range where conventional HEP schemes become invalid but our scheme still works.
We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive owing to its high accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes the advantage of the TE method that guarantees great accuracy at small wavenumbers, and retains the property of the MA method that the numerical errors stay within a limited bound. Thus, it leads to high accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
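The conventional TE coefficients that the paper takes as its baseline can be computed by matching Taylor terms. The sketch below derives them for an explicit staggered-grid first-derivative stencil and checks the small-wavenumber accuracy; it does not implement the paper's Remez/minimax optimization or the implicit scheme itself:

```python
import numpy as np

def te_staggered_coeffs(M):
    """Taylor-expansion coefficients c_m of the 2M-th order staggered-grid
    first-derivative stencil
        f'(x) ~ (1/h) * sum_m c_m [f(x + (m-1/2)h) - f(x - (m-1/2)h)].
    Matching Taylor terms yields the linear system
        sum_m c_m * 2*(m-1/2)^(2j+1) = delta_{j,0},  j = 0..M-1."""
    m = np.arange(1, M + 1) - 0.5
    A = np.array([2.0 * m ** (2 * j + 1) for j in range(M)])
    b = np.zeros(M)
    b[0] = 1.0
    return np.linalg.solve(A, b)

def dispersion_error(coeffs, kh):
    # Relative error of the numerical wavenumber 2*sum_m c_m sin((m-1/2)*kh)
    # against the exact value kh.
    m = np.arange(1, len(coeffs) + 1) - 0.5
    return 2.0 * np.sum(coeffs * np.sin(m * kh)) / kh - 1.0

c = te_staggered_coeffs(2)
print(np.round(c, 6))                       # classic 4th-order values 9/8, -1/24
print(abs(dispersion_error(c, 0.1)) < 1e-6)  # → True: accurate at small kh
```

The TE coefficients are exact as kh goes to zero but degrade quickly at large wavenumbers; the paper's MTE/MA approach instead bounds the maximum of this dispersion error over a wider kh range.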
Poulsen, Signe; Jørgensen, Michael Søgaard
2011-09-01
The aim of this article is to analyse the social shaping of worksite food interventions at two Danish worksites. The overall aims are to contribute first, to the theoretical frameworks for the planning and analysis of food and health interventions at worksites and second, to a foodscape approach to worksite food interventions. The article is based on a case study of the design of a canteen takeaway (CTA) scheme for employees at two Danish hospitals. This was carried out as part of a project to investigate the shaping and impact of schemes that offer employees meals to buy, to take home or to eat at the worksite during irregular working hours. Data collection was carried out through semi-structured interviews with stakeholders within the two change processes. Two focus group interviews were also carried out at one hospital and results from a user survey carried out by other researchers at the other hospital were included. Theoretically, the study was based on the social constitution approach to change processes at worksites and a co-evolution approach to problem-solution complexes as part of change processes. Both interventions were initiated because of the need to improve the food supply for the evening shift and the work-life balance. The shaping of the schemes at the two hospitals became rather different change processes due to the local organizational processes shaped by previously developed norms and values. At one hospital the change process challenged norms and values about food culture and challenged ideas in the canteen kitchen about working hours. At the other hospital, the change was more of a learning process that aimed at finding the best way to offer a CTA scheme. Worksite health promotion practitioners should be aware that the intervention itself is an object of negotiation between different stakeholders at a worksite based on existing norms and values. 
The social contextual model and the setting approach to worksite health interventions lack reflection on how such norms and values might influence the shaping of the intervention. It is recommended that future planning and analysis of worksite health promotion interventions apply a combination of the social constitution approach to worksites and an integrated food supply-and-demand perspective based on analyses of the co-evolution of problem-solution complexes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, Jeppe, E-mail: jeppe@chem.au.dk
2014-07-21
A novel algorithm is introduced for the transformation of wave functions between the bases of Slater determinants (SD) and configuration state functions (CSF) in the genealogical coupling scheme. By modifying the expansion coefficients as each electron is spin-coupled, rather than performing a single many-electron transformation, the large transformation matrix that plagues previous approaches is avoided and the required number of operations is drastically reduced. As an example of the efficiency of the algorithm, the transformation for a configuration with 30 unpaired electrons and singlet spin is discussed. For this case, the 10 × 10^6 coefficients in the CSF basis are obtained from the 150 × 10^6 coefficients in the SD basis in 1 min, which should be compared with the seven years that the previously employed method is estimated to require.
An effective and secure key-management scheme for hierarchical access control in E-medicine system.
Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit
2013-04-01
Recently, several hierarchical access control schemes have been proposed in the literature to provide security for e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs a huge computational cost for verification of public information in the public domain, as it uses ECC digital signatures, which are costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems are also solved efficiently compared to other related schemes, making our scheme much more suitable for practical applications of e-medicine systems.
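The core idea of replacing ECC signatures with only a symmetric primitive and a one-way hash can be illustrated with a minimal key-derivation sketch. The class names and derivation rule below are illustrative assumptions, not the paper's exact construction:

```python
import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    """Derive a child security-class key from its parent's key with a
    one-way hash: a parent class can recompute every descendant key,
    but a child cannot invert the hash to recover the parent's key."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# Hypothetical three-level medical hierarchy: admin -> physician -> nurse.
admin_key = b"example-root-key"            # illustrative only
physician_key = derive_key(admin_key, "physician")
nurse_key = derive_key(physician_key, "nurse")
```

Because derivation is deterministic, only the root key and the hierarchy structure need private storage; all lower-level keys can be recomputed on demand, which is where the storage saving over public-parameter ECC schemes comes from.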
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
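The paper's schemes are third- and fifth-order central WENO constructions; as a point of reference, the classical first-order baseline they improve upon is the monotone Lax-Friedrichs scheme for u_t + H(u_x) = 0, sketched here with periodic boundaries (the Hamiltonian and grid parameters are illustrative):

```python
import math

def lax_friedrichs_hj(u, H, alpha, dx, dt, steps):
    """Advance u_t + H(u_x) = 0 with the first-order Lax-Friedrichs scheme.

    Periodic boundaries; alpha must bound |H'| over the data for
    monotonicity, and dt should satisfy alpha * dt / dx <= 1.
    """
    n = len(u)
    for _ in range(steps):
        new = u[:]
        for j in range(n):
            up, um = u[(j + 1) % n], u[(j - 1) % n]
            p = (up - um) / (2 * dx)          # central approximation of u_x
            new[j] = (u[j] - dt * H(p)
                      + alpha * dt / (2 * dx) * (up - 2 * u[j] + um))
        u = new
    return u

# Example: H(p) = p**2 / 2 with smooth periodic initial data.
n, dx, dt = 64, 1.0 / 64, 0.002
u0 = [math.sin(2 * math.pi * j * dx) for j in range(n)]
u1 = lax_friedrichs_hj(u0, lambda p: p * p / 2, alpha=2 * math.pi,
                       dx=dx, dt=dt, steps=10)
```

The higher-order schemes in the paper replace the central difference for u_x with a (C)WENO reconstruction while keeping the same Godunov-type central framework.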
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2016-09-01
In this paper, an algorithm based on the classification of rainfall-intensity probabilities is developed for rainfall estimation from Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) data. The classification scheme uses various spectral parameters of SEVIRI that provide information about cloud-top temperature and optical and microphysical cloud properties. The presented method is developed and trained for the north of Algeria. The calibration of the method is carried out using, as a reference, rain classification fields derived from radar for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. Comparisons between satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well. Indeed, the correlation coefficient is significant (r = 0.87). The values of POD, POFD and FAR are 80%, 13% and 25%, respectively. Also, for a rainfall estimation of about 614 mm, the RMSD, Bias, MAD and PD are 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
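The POD, POFD and FAR figures quoted above follow the standard definitions from a 2x2 rain/no-rain contingency table; a minimal sketch (the counts in the example are illustrative, not the study's data):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Standard categorical scores from a 2x2 rain / no-rain contingency
    table built by comparing satellite estimates against a radar reference."""
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # prob. of false detection
    far = false_alarms / (hits + false_alarms)                # false alarm ratio
    return pod, pofd, far

# Illustrative counts only:
pod, pofd, far = verification_scores(hits=75, misses=25,
                                     false_alarms=25, correct_negatives=175)
```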
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of the associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate, SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers, including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
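The black-box property comes from estimating the divergence term of SURE with a random probe, so only two evaluations of the reconstruction are needed. A minimal real-valued sketch of that Monte-Carlo divergence probe (the paper's full scheme handles complex images and a weighted k-space error):

```python
import random

def mc_divergence(f, y, eps=1e-4, rng=random):
    """Monte-Carlo estimate of the divergence of f at y:
        div f(y) ~= b . (f(y + eps*b) - f(y)) / eps
    for a random +/-1 probe b.  Only evaluations of f are needed,
    so f can be an arbitrary black-box reconstruction algorithm."""
    b = [rng.choice((-1.0, 1.0)) for _ in y]
    fy = f(y)
    fyb = f([yi + eps * bi for yi, bi in zip(y, b)])
    return sum(bi * (f1 - f0) / eps for bi, f1, f0 in zip(b, fyb, fy))

# For a linear shrinkage f(y) = a*y the estimate is exact: div = a * len(y).
a = 0.7
shrink = lambda v: [a * vi for vi in v]
y = [0.3, -1.2, 2.5, 0.8]
```

The divergence estimate then plugs into the SURE expression, which is minimized over the regularization parameter without ever needing the true image.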
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1993-01-01
The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations, both the Steger-Warming and the van Leer flux-vector splitting methods are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components as well as for the high frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
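The von Neumann method reduces to computing the amplification factor g(theta) of a scheme over all Fourier angles and checking |g| <= 1. A one-dimensional sketch on the simplest upwind scheme (the paper's 3D factorization analysis follows the same principle with matrix-valued amplification factors):

```python
import cmath
from math import pi

def max_amplification(g, samples=720):
    """Largest |g(theta)| of an amplification factor over Fourier angles
    theta in (-pi, pi]; the scheme is von Neumann stable when this
    maximum does not exceed 1."""
    return max(abs(g(-pi + 2 * pi * (k + 1) / samples)) for k in range(samples))

# First-order upwind scheme for u_t + a u_x = 0 with CFL number c:
#   g(theta) = 1 - c * (1 - exp(-i*theta)),   stable iff 0 <= c <= 1.
def g_upwind(c):
    return lambda th: 1 - c * (1 - cmath.exp(-1j * th))
```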
Verification in Referral-Based Crowdsourcing
Naroditskiy, Victor; Rahwan, Iyad; Cebrian, Manuel; Jennings, Nicholas R.
2012-01-01
Online social networks offer unprecedented potential for rallying a large number of people to accomplish a given task. Here we focus on information gathering tasks where rare information is sought through “referral-based crowdsourcing”: the information request is propagated recursively through invitations among members of a social network. Whereas previous work analyzed incentives for the referral process in a setting with only correct reports, misreporting is known to be both pervasive in crowdsourcing applications, and difficult/costly to filter out. A motivating example for our work is the DARPA Red Balloon Challenge where the level of misreporting was very high. In order to undertake a formal study of verification, we introduce a model where agents can exert costly effort to perform verification and false reports can be penalized. This is the first model of verification and it provides many directions for future research, which we point out. Our main theoretical result is the compensation scheme that minimizes the cost of retrieving the correct answer. Notably, this optimal compensation scheme coincides with the winning strategy of the Red Balloon Challenge. PMID:23071530
Numerical approach of collision avoidance and optimal control on robotic manipulators
NASA Technical Reports Server (NTRS)
Wang, Jyhshing Jack
1990-01-01
Collision-free optimal motion and trajectory planning for robotic manipulators are solved by the method of the sequential gradient restoration algorithm. Numerical examples of a two-degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and obstacle avoidance scheme. The obstacle is deliberately placed midway along, or even further inward of, the previous no-obstacle optimal trajectory. In the minimum-time case, the trajectory grazes the obstacle, and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance indices, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin Extremum Principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
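One common way to realize ellipsoidal obstacles "as penalty functions via distance functions" is a potential that is positive only when a point intrudes into the ellipsoid; the exact functional form used in the work is not reproduced here, so the following is an assumed sketch:

```python
def ellipsoid_penalty(x, center, semi_axes, weight=1.0):
    """Artificial-potential penalty for one ellipsoidal obstacle.

    d(x) = sum(((x_i - c_i) / a_i)**2) is < 1 inside the ellipsoid and
    > 1 outside, so the penalty activates only on intrusion.  Summing
    one such term per obstacle handles multiple obstacles."""
    d = sum(((xi - ci) / ai) ** 2 for xi, ci, ai in zip(x, center, semi_axes))
    return weight * max(0.0, 1.0 - d) ** 2

# A point outside the obstacle incurs no penalty; an intruding point does.
outside = ellipsoid_penalty((3.0, 0.0), center=(0.0, 0.0), semi_axes=(1.0, 2.0))
inside = ellipsoid_penalty((0.2, 0.1), center=(0.0, 0.0), semi_axes=(1.0, 2.0))
```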
Numerical 3+1 General Relativistic Magnetohydrodynamics: A Local Characteristic Approach
NASA Astrophysics Data System (ADS)
Antón, Luis; Zanotti, Olindo; Miralles, Juan A.; Martí, José M.; Ibáñez, José M.; Font, José A.; Pons, José A.
2006-01-01
We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the 3+1 formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics (Banyuls et al. 1997) where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e., Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic MHD equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic MHD in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick disks accreting onto a black hole and subject to the magnetorotational instability.
Robust Stability of Scaled-Four-Channel Teleoperation with Internet Time-Varying Delays
Delgado, Emma; Barreiro, Antonio; Falcón, Pablo; Díaz-Cacho, Miguel
2016-01-01
We describe the application of a generic stability framework for a teleoperation system under time-varying delay conditions, as addressed in a previous work, to a scaled-four-channel (γ-4C) control scheme. Described is how varying delays are dealt with by means of dynamic encapsulation, giving rise to mu-test conditions for robust stability and offering an appealing frequency technique to deal with the stability robustness of the architecture. We discuss ideal transparency problems and we adapt classical solutions so that controllers are proper, without single or double differentiators, and thus avoid the negative effects of noise. The control scheme was fine-tuned and tested for complete stability to zero of the whole state, while seeking a practical solution to the trade-off between stability and transparency in the Internet-based teleoperation. These ideas were tested on an Internet-based application with two Omni devices at remote laboratory locations via simulations and real remote experiments that achieved robust stability, while performing well in terms of position synchronization and force transparency. PMID:27128914
NASA Technical Reports Server (NTRS)
Margulis, L.; Guerrero, R.
1991-01-01
How should the world's living organisms be classified? Into how many kingdoms should they be grouped? Scientists have been grappling with these questions since the time of Aristotle, drawing on a broad base of biological characteristics for clues. The fossil record, visible traits of living organisms and, more recently, results from cell biology have all shaped theories of biological classification. But last year a new and controversial concept emerged: a classification of life based solely on molecular traits. The focal point of the controversy is a tree of life, or "phylogeny", devised by Carl Woese of the University of Illinois, Otto Kandler of the University of Munich and Mark Wheelis of the University of California. The tree is unusual because, unlike all previous schemes, it is constructed solely from biochemical data such as DNA sequences rather than a range of different organism characteristics. But that is not all. The scheme also challenges the idea that life on Earth is best divided into five kingdoms, with the main split being between bacteria and all other organisms. Woese and his colleagues create three main groupings by dividing the bacteria in two and unifying all other organisms.
Müller, Dirk K; Pampel, André; Möller, Harald E
2013-05-01
Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
A Quantum Proxy Signature Scheme Based on Genuine Five-qubit Entangled State
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Huang, Jun; Yu, Yao-Feng; Jiang, Xiu-Li
2014-09-01
In this paper a very efficient and secure proxy signature scheme is proposed. It is based on controlled quantum teleportation, with a genuine five-qubit entangled state serving as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. Quantum key distribution and the one-time pad are adopted in our scheme, which guarantee not only the unconditional security of the scheme but also the anonymity of the message owner.
Das, Ashok Kumar; Bruhadeshwar, Bezawada
2013-10-01
Recently, Lee and Liu proposed an efficient password-based authentication and key agreement scheme using smart cards for the telecare medicine information system [J. Med. Syst. (2013) 37:9933]. In this paper, we show that though their scheme is efficient, it still has two security weaknesses: (1) design flaws in the authentication phase and (2) design flaws in the password change phase. In order to withstand these flaws in Lee-Liu's scheme, we propose an improvement of their scheme. Our improved scheme also keeps the original merits of Lee-Liu's scheme, and we show that it is efficient compared to theirs. Further, through security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that it is secure against passive and active attacks.
A structure adapted multipole method for electrostatic interactions in protein dynamics
NASA Astrophysics Data System (ADS)
Niedermeier, Christoph; Tavan, Paul
1994-07-01
We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N^2) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N), or even with O(N) if they are augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered, rather than chosen rigidly as a cubic grid. Compared to truncation methods, we manage to reduce the errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
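The two-moment truncation can be made concrete on a single cluster: replace the exact pairwise sum by a monopole plus dipole evaluated about the cluster center. This sketch (Gaussian units, geometric center, no hierarchy) only illustrates the approximation the paper applies within its structure-adapted decomposition:

```python
def exact_potential(charges, positions, r):
    """Direct O(N) sum of Coulomb potentials at point r."""
    tot = 0.0
    for q, p in zip(charges, positions):
        d = [ri - pi for ri, pi in zip(r, p)]
        tot += q / sum(di * di for di in d) ** 0.5
    return tot

def two_moment_potential(charges, positions, r):
    """Multipole expansion truncated after charge and dipole moments;
    the neglected quadrupole term decays one order faster in 1/|r|."""
    n = len(charges)
    center = [sum(p[k] for p in positions) / n for k in range(3)]
    Q = sum(charges)
    dip = [sum(q * (p[k] - center[k]) for q, p in zip(charges, positions))
           for k in range(3)]
    d = [rk - ck for rk, ck in zip(r, center)]
    dist = sum(di * di for di in d) ** 0.5
    return Q / dist + sum(pk * dk for pk, dk in zip(dip, d)) / dist ** 3

# Small cluster near the origin, evaluated at a distant point.
charges = [1.0, -0.5, 0.8]
positions = [(0.3, 0.1, -0.2), (-0.4, 0.2, 0.0), (0.1, -0.3, 0.4)]
r = (50.0, 7.0, -3.0)
```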
Design of Provider-Provisioned Website Protection Scheme against Malware Distribution
NASA Astrophysics Data System (ADS)
Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka
Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
The Milk in Schools Scheme, 1934-45: "Nationalization" and Resistance
ERIC Educational Resources Information Center
Atkins, Peter
2005-01-01
In October 1934 the National Government took over what had previously been a commercial initiative to encourage milk-drinking in schools. By the outbreak of war the Milk in Schools Scheme had reached 87 per cent of elementary schools in England and Wales and 56 per cent of pupils were drinking one-third of a pint daily. This paper investigates the…
Latif, Rabia; Abbas, Haider; Latif, Seemab; Masood, Ashraf
2016-07-01
Security and privacy are the first and foremost concerns that should be given special attention when dealing with Wireless Body Area Networks (WBANs). As WBAN sensors operate in an unattended environment and carry critical patient health information, the Distributed Denial of Service (DDoS) attack is one of the major attacks in the WBAN environment: it not only exhausts the available resources but also influences the reliability of the information being transmitted. This research work is an extension of our previous work, in which a machine-learning-based attack detection algorithm was proposed to detect DDoS attacks in the WBAN environment. However, in order to avoid complexity, no consideration was given to the traceback mechanism. During traceback, the challenge lies in reconstructing the attack path to identify the attack source. Among existing traceback techniques, the Probabilistic Packet Marking (PPM) approach is the most commonly used in conventional IP-based networks. However, since the marking probability assignment has a significant effect on both the convergence time and the performance of a scheme, PPM is not directly applicable in the WBAN environment due to high convergence time and overhead on intermediate nodes. Therefore, in this paper we propose a new scheme called the Efficient Traceback Technique (ETT), based on the Dynamic Probability Packet Marking (DPPM) approach, which uses the MAC header in place of the IP header. Instead of a fixed marking probability, the proposed scheme uses a variable marking probability based on the number of hops travelled by a packet to reach the target node. Finally, path reconstruction algorithms are proposed to trace back an attacker. Evaluation and simulation results indicate that the proposed solution outperforms fixed PPM in terms of convergence time and computational overhead on nodes.
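The advantage of hop-count-dependent marking can be shown analytically. If the router at hop d marks with probability 1/d, every router on an n-hop path is equally likely to be the one whose mark reaches the victim, which is what speeds up convergence relative to fixed-probability PPM. A sketch of that calculation (illustrating the DPPM property ETT builds on, not the full ETT protocol):

```python
from fractions import Fraction

def mark_survival_probabilities(n_hops):
    """Probability that each router's mark is the one delivered to the
    victim under dynamic probability packet marking, where the router
    at hop d marks with probability 1/d.

    Router i's mark survives iff no later router overwrites it:
      P(i) = (1/i) * prod_{j=i+1..n} (1 - 1/j)
           = (1/i) * (i/n) = 1/n   for every i,
    i.e., the sampling of path routers is exactly uniform."""
    probs = []
    for i in range(1, n_hops + 1):
        p = Fraction(1, i)
        for j in range(i + 1, n_hops + 1):
            p *= 1 - Fraction(1, j)
        probs.append(p)
    return probs
```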
An improved scheme for Flip-OFDM based on Hartley transform in short-range IM/DD systems.
Zhou, Ji; Qiao, Yaojun; Cai, Zhuo; Ji, Yuefeng
2014-08-25
In this paper, an improved Flip-OFDM scheme is proposed for IM/DD optical systems, in which the modulation/demodulation processing takes advantage of the fast Hartley transform (FHT) algorithm. The improved scheme is realized in one symbol period, while the conventional Flip-OFDM scheme based on the fast Fourier transform (FFT) requires two consecutive symbol periods. The complexity of many operations in the improved scheme, such as the CP operation, polarity inversion and symbol delay, is therefore half of that in the conventional scheme. Compared to the FFT with a complex input constellation, the complexity of the FHT with a real input constellation is halved. A transmission experiment over 50-km SSMF has been carried out to verify the feasibility of the improved scheme. In conclusion, the improved scheme has the same BER performance as the conventional scheme but is markedly less complex.
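The complexity saving stems from the Hartley kernel being real-valued, so real constellations stay real throughout. A direct, unoptimized O(N^2) discrete Hartley transform makes the key properties concrete (a production modem would use a fast butterfly FHT):

```python
import math

def dht(x):
    """Discrete Hartley transform:
        X_k = sum_n x_n * cas(2*pi*k*n/N),  cas(t) = cos(t) + sin(t).
    Real input gives real output, and the DHT is its own inverse up to
    a factor of N: dht(dht(x)) == N * x."""
    n = len(x)
    return [sum(x[m] * (math.cos(2 * math.pi * k * m / n) +
                        math.sin(2 * math.pi * k * m / n)) for m in range(n))
            for k in range(n)]
```

The involution property means the same real-arithmetic kernel serves for both modulation and demodulation, unlike the FFT/IFFT pair on complex data.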
Chain-Based Communication in Cylindrical Underwater Wireless Sensor Networks
Javaid, Nadeem; Jafri, Mohsin Raza; Khan, Zahoor Ali; Alrajeh, Nabil; Imran, Muhammad; Vasilakos, Athanasios
2015-01-01
Appropriate network design is very significant for Underwater Wireless Sensor Networks (UWSNs). Application-oriented UWSNs are planned to achieve certain objectives, so there is always a demand for efficient data routing schemes that can fulfill the requirements of application-oriented UWSNs. These networks can be of any shape, i.e., rectangular, cylindrical or square. In this paper, we propose chain-based routing schemes for application-oriented cylindrical networks and also formulate mathematical models to find a global optimum path for data transmission. In the first scheme, we devise four interconnected chains of sensor nodes to perform data communication. In the second scheme, we propose a routing scheme in which two chains of sensor nodes are interconnected, whereas in the third scheme, single-chain-based routing is performed in cylindrical networks. After finding local optimum paths in separate chains, we find global optimum paths through their interconnection. Moreover, we develop a computational model for the analysis of end-to-end delay. We compare the performance of the above three proposed schemes with that of the Power Efficient Gathering System in Sensor Information Systems (PEGASIS) and Congestion-adjusted PEGASIS (C-PEGASIS). Simulation results show that our proposed 4-chain-based scheme performs better than the other selected schemes in terms of network lifetime, end-to-end delay, path loss, transmission loss, and packet sending rate. PMID:25658394
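Chain-based schemes in the PEGASIS family typically build each chain greedily by repeatedly appending the nearest unchained node. The sketch below shows only that per-chain step, not the paper's interconnection of chains or its global-optimum formulation:

```python
def build_chain(nodes, start):
    """Greedy nearest-neighbour chain over sensor node coordinates
    (PEGASIS-style).  Each step appends the closest not-yet-chained
    node, yielding a locally (not globally) optimal chain."""
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    chain, remaining = [start], set(range(len(nodes))) - {start}
    while remaining:
        last = nodes[chain[-1]]
        nxt = min(remaining, key=lambda i: dist2(nodes[i], last))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

# Four collinear nodes chain up in spatial order when started at one end.
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 0.0)]
chain = build_chain(nodes, 0)
```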
An exact stiffness theory for unidirectional xFRP composites
NASA Astrophysics Data System (ADS)
Klasztorny, M.; Konderla, P.; Piekarski, R.
2009-01-01
UD xFRP composites, i.e., isotropic plastics reinforced with long, transversely isotropic fibres packed unidirectionally according to the hexagonal scheme, are considered. The constituent materials are geometrically and physically linear. The previous formulations of the exact stiffness theory of such composites are revised, and the theory is developed further based on selected boundary-value problems of elasticity theory. The numerical examples presented focus on testing the theory against its previous variants and against experimental values of the effective elastic constants. The authors point out that the exact stiffness theory of UD xFRP composites, with the modifications proposed in this study, will be useful in engineering practice and in solving current problems of the mechanics of composite materials.
Public-key quantum digital signature scheme with one-time pad private-key
NASA Astrophysics Data System (ADS)
Chen, Feng-Lin; Liu, Wan-Fang; Chen, Su-Gen; Wang, Zhi-Hua
2018-01-01
A quantum digital signature scheme is proposed based on a public-key quantum cryptosystem. In the scheme, the verification public key is derived from the signer's identity information (such as an e-mail address) on the foundation of identity-based encryption, and the signature private key is generated by a one-time pad (OTP) protocol. The public-key/private-key pair consists of classical bits, but the signature cipher consists of quantum qubits. After the signer announces the public key and generates the final quantum signature, each verifier can publicly verify whether the signature is valid using the public key and a quantum digital digest. Analysis shows that the proposed scheme satisfies non-repudiation and unforgeability. Information-theoretic security of the scheme is ensured by quantum indistinguishability and the OTP protocol. Being based on a public-key cryptosystem, the proposed scheme is easier to realize under current technical conditions than other quantum signature schemes.
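The classical OTP primitive that the private-key generation relies on is simply XOR with a random pad that is at least as long as the data and never reused; its information-theoretic secrecy is what carries over to the scheme. A minimal sketch (the pad bytes below are illustrative, not a real random source):

```python
def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time-pad encryption/decryption by XOR.  Applying the same
    pad twice recovers the plaintext; secrecy requires the pad to be
    uniformly random, as long as the data, and used only once."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"sign me"
pad = bytes([0x4f, 0x13, 0xa8, 0x77, 0x01, 0xc2, 0x9e])  # illustrative pad
cipher = otp_xor(message, pad)
```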
Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Xu, Kun
1998-01-01
A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and its accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on possible modifications of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for a non-strictly hyperbolic system.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block-diagram-based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed in the examples; the computational results testify that the block diagram scheme is efficient for Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
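The paper does not spell out the numerical method inside its Simulink fractional operator block; a standard discretization such a block can be checked against is the Grünwald-Letnikov formula, sketched here as an assumption rather than the block's exact internals:

```python
def gl_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative at t
    (assuming zero history before t = 0):
        D^alpha f(t) ~= h**(-alpha) * sum_k w_k * f(t - k*h),
    with w_0 = 1 and the recursion w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)                    # k = 0 term
    for k in range(1, n + 1):
        w *= 1 - (alpha + 1) / k          # w_k from w_{k-1}
        acc += w * f(t - k * h)
    return acc / h ** alpha

# Sanity checks: alpha = 1 reduces to the backward difference, and
# alpha = 0 returns the function value itself.
d1 = gl_derivative(lambda t: t, 1.0, alpha=1.0)
d0 = gl_derivative(lambda t: t, 1.0, alpha=0.0)
```

Non-integer alpha interpolates between these limits, which is exactly the behaviour a fractional operator block must reproduce.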
Secure Communications in CIoT Networks with a Wireless Energy Harvesting Untrusted Relay
Hu, Hequn; Liao, Xuewen
2017-01-01
The Internet of Things (IoT) represents a bright prospect that a variety of common appliances can connect to one another, as well as with the rest of the Internet, to vastly improve our lives. Unique communication and security challenges have been brought out by the limited hardware, low-complexity, and severe energy constraints of IoT devices. In addition, a severe spectrum scarcity problem has also been stimulated by the use of a large number of IoT devices. In this paper, cognitive IoT (CIoT) is considered where an IoT network works as the secondary system using underlay spectrum sharing. A wireless energy harvesting (EH) node is used as a relay to improve the coverage of an IoT device. However, the relay could be a potential eavesdropper to intercept the IoT device’s messages. This paper considers the problem of secure communication between the IoT device (e.g., sensor) and a destination (e.g., controller) via the wireless EH untrusted relay. Since the destination can be equipped with adequate energy supply, secure schemes based on destination-aided jamming are proposed based on power splitting (PS) and time splitting (TS) policies, called intuitive secure schemes based on PS (Int-PS), precoded secure scheme based on PS (Pre-PS), intuitive secure scheme based on TS (Int-TS) and precoded secure scheme based on TS (Pre-TS), respectively. The secure performances of the proposed schemes are evaluated through the metric of probability of successfully secure transmission (PSST), which represents the probability that the interference constraint of the primary user is satisfied and the secrecy rate is positive. PSST is analyzed for the proposed secure schemes, and the closed form expressions of PSST for Pre-PS and Pre-TS are derived and validated through simulation results. Numerical results show that the precoded secure schemes have better PSST than the intuitive secure schemes under similar power consumption. 
When the secure schemes based on the PS and TS policies have similar PSST, the average transmit power consumption of the TS-based secure scheme is lower. The influences of the power splitting and time splitting ratios are also discussed through simulations. PMID:28869540
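The PSST metric — the probability that the primary user's interference constraint is satisfied and the secrecy rate is positive — lends itself to Monte Carlo estimation. The sketch below is purely illustrative: the Rayleigh-fading gains, power levels, interference threshold, and the crude amplify-and-forward SNR expressions are assumptions, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
P_s, P_j, I_th = 1.0, 1.0, 0.5   # assumed source power, jamming power, interference limit

# Illustrative Rayleigh-fading channel power gains (exponential, unit mean)
g_sr = rng.exponential(1.0, N)   # source -> relay
g_rd = rng.exponential(1.0, N)   # relay -> destination
g_sp = rng.exponential(1.0, N)   # source -> primary user (interference link)
g_dr = rng.exponential(1.0, N)   # destination -> relay (jamming link)

# The untrusted relay's SINR is degraded by destination-aided jamming,
# which the destination can later cancel from the forwarded signal.
snr_relay = P_s * g_sr / (P_j * g_dr + 1.0)
snr_dest = P_s * g_sr * g_rd / (g_rd + 1.0)   # crude amplify-and-forward proxy

secure = snr_dest > snr_relay        # positive secrecy rate
admitted = P_s * g_sp <= I_th        # underlay interference constraint met
psst = np.mean(secure & admitted)
print(f"estimated PSST: {psst:.3f}")
```

The same loop can be rerun while sweeping the power splitting or time splitting ratio to reproduce the kind of trade-off curves the paper discusses.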
High-speed detection of DNA translocation in nanopipettes.
Fraccari, Raquel L; Ciccarella, Pietro; Bahrami, Azadeh; Carminati, Marco; Ferrari, Giorgio; Albrecht, Tim
2016-04-14
We present a high-speed electrical detection scheme based on a custom-designed CMOS amplifier which allows the analysis of DNA translocation in glass nanopipettes on a microsecond timescale. Translocation of different DNA lengths in KCl electrolyte provides a scaling factor of the DNA translocation time equal to p = 1.22, which is different from values observed previously with nanopipettes in LiCl electrolyte or with nanopores. Based on a theoretical model involving electrophoresis, hydrodynamics and surface friction, we show that the experimentally observed range of p-values may be the result of, or at least be affected by, DNA adsorption and friction between the DNA and the substrate surface.
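A scaling factor like p = 1.22 comes from fitting a power law τ ∝ L^p to translocation time versus DNA length, i.e. a straight-line fit in log-log coordinates. A minimal sketch with hypothetical data (the lengths, times, and prefactor are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical (length in bp, translocation time in µs) pairs
lengths = np.array([1000, 2000, 5000, 10000, 20000], dtype=float)
p_true = 1.22
times = 50.0 * (lengths / 1000.0) ** p_true   # synthetic power-law data

# The scaling exponent p is the slope of log(time) vs log(length)
p_fit, log_a = np.polyfit(np.log(lengths), np.log(times), 1)
print(f"fitted scaling exponent p = {p_fit:.2f}")
```

With real, noisy event data one would fit the same line to per-event medians and quote the uncertainty of the slope.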
Efficient entanglement distribution over 200 kilometers.
Dynes, J F; Takesue, H; Yuan, Z L; Sharpe, A W; Harada, K; Honjo, T; Kamada, H; Tadanaga, O; Nishida, Y; Asobe, M; Shields, A J
2009-07-06
Here we report the first demonstration of entanglement distribution over a record distance of 200 km which is of sufficient fidelity to realize secure communication. In contrast to previous entanglement distribution schemes, we use detection elements based on practical avalanche photodiodes (APDs) operating in a self-differencing mode. These APDs are low-cost, compact and easy to operate, requiring only electrical cooling to achieve high single photon detection efficiency. The self-differencing APDs in combination with a reliable parametric down-conversion source demonstrate that entanglement distribution over ultra-long distances has become both possible and practical. Consequently the outlook is extremely promising for real world entanglement-based communication between distantly separated parties.
Liu, Li; Gong, Yuan; Wu, Yu; Zhao, Tian; Wu, Hui-Juan; Rao, Yun-Jiang
2012-01-01
Fiber-optic interferometric sensors based on graded-index multimode fibers have very high refractive-index sensitivity, as we previously demonstrated. In this paper, spatial-frequency multiplexing of this type of fiber-optic refractive index sensor is investigated. It is estimated that multiplexing of more than 10 such sensors is possible. In the multiplexing scheme, one of the sensors is used to investigate the refractive index and temperature responses. The fast Fourier transform (FFT) of the combined reflective spectra is analyzed. The intensity of the FFT spectra is linearly related to the refractive index and is not sensitive to the temperature.
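Spatial-frequency multiplexing works because each interferometric sensor imprints a cosine fringe on the reflection spectrum whose period in wavenumber is set by its optical path difference (OPD), so different sensors appear as separate FFT peaks. The sketch below demultiplexes two simulated sensors; the wavenumber band, OPDs, and fringe amplitudes are assumptions for illustration only.

```python
import numpy as np

k = np.linspace(4.6e6, 4.8e6, 4096)            # wavenumber axis (rad/m), assumed band
opd1, opd2 = 2e-4, 5e-4                        # assumed sensor OPDs (m)
spectrum = 1.0 + 0.6 * np.cos(k * opd1) + 0.2 * np.cos(k * opd2)

# Each sensor shows up as a peak at spatial frequency OPD / (2*pi)
mag = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
freqs = np.fft.rfftfreq(k.size, d=k[1] - k[0])  # cycles per (rad/m)
opd_est = 2 * np.pi * freqs[np.argmax(mag)]     # OPD of the strongest sensor
print(f"recovered OPD: {opd_est:.2e} m")
```

Tracking the intensity of each sensor's own peak, rather than its position, is what makes the readout linear in refractive index and insensitive to the other channels.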
Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes
Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C.-W.
2014-01-01
Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions. PMID:25207333
Zhang, Liping; Zhu, Shaohui; Tang, Shanyu
2017-03-01
Telecare medicine information systems (TMIS) provide flexible and convenient e-health care. However, the medical records transmitted in TMIS are exposed to unsecured public networks, so TMIS are more vulnerable to various types of security threats and attacks. To provide privacy protection for TMIS, a secure and efficient authenticated key agreement scheme is urgently needed to protect the sensitive medical data. Recently, Mishra et al. proposed a biometrics-based authenticated key agreement scheme for TMIS using a hash function and nonces; they claimed that their scheme could eliminate the security weaknesses of Yan et al.'s scheme and provide dynamic identity protection and user anonymity. In this paper, however, we demonstrate that Mishra et al.'s scheme suffers from replay and man-in-the-middle attacks and fails to provide perfect forward secrecy. To overcome the weaknesses of Mishra et al.'s scheme, we then propose a three-factor authenticated key agreement scheme that enables the patient to enjoy remote healthcare services via TMIS with privacy protection. Chaotic map-based cryptography is employed in the proposed scheme to achieve a delicate balance of security and performance. Security analysis demonstrates that the proposed scheme resists various attacks and provides several attractive security properties. Performance evaluation shows that the proposed scheme increases efficiency in comparison with other related schemes.
Park, YoHan; Park, YoungHo
2016-12-14
Secure communication is a significant issue in wireless sensor networks. User authentication and key agreement are essential for providing a secure system, especially in user-oriented mobile services. It is also necessary to protect the identity of each individual in wireless environments to avoid personal privacy concerns. Many authentication and key agreement schemes utilize a smart card in addition to a password to support security functionalities. However, these schemes often fail to provide security along with privacy. In 2015, Chang et al. analyzed the security vulnerabilities of previous schemes and presented the two-factor authentication scheme that provided user privacy by using dynamic identities. However, when we cryptanalyzed Chang et al.'s scheme, we found that it does not provide sufficient security for wireless sensor networks and fails to provide accurate password updates. This paper proposes a security-enhanced authentication and key agreement scheme to overcome these security weaknesses using biometric information and an elliptic curve cryptosystem. We analyze the security of the proposed scheme against various attacks and check its viability in the mobile environment.
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
Sutrala, Anil Kumar; Das, Ashok Kumar; Odelu, Vanga; Wazid, Mohammad; Kumari, Saru
2016-10-01
Information and communication technology (ICT) has changed the entire paradigm of society. ICT facilitates people to use medical services over the Internet, thereby reducing travel cost, hospitalization cost and time to a great extent. Recent advancements in Telecare Medicine Information Systems (TMIS) facilitate users/patients to access medical services over the Internet by gaining health monitoring facilities at home. Amin and Biswas recently proposed an RSA-based user authentication and session key agreement protocol usable for TMIS, which is an improvement over Giri et al.'s RSA-based user authentication scheme for TMIS. In this paper, we show that though Amin-Biswas's scheme considerably improves on the security drawbacks of Giri et al.'s scheme, it still has security weaknesses, as it suffers from attacks such as the privileged insider attack, user impersonation attack, replay attack and offline password guessing attack. A new RSA-based user authentication scheme for TMIS is proposed, which overcomes the security pitfalls of Amin-Biswas's scheme and also preserves the user anonymity property. A careful formal security analysis is carried out using the two widely accepted approaches of Burrows-Abadi-Needham (BAN) logic and the random oracle model, and an informal security analysis is also given. These security analyses show the robustness of our new scheme against the various known attacks as well as the attacks found in Amin-Biswas's scheme. The proposed scheme is also simulated and formally verified using the widely accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The rigorous security analysis and verification show that the proposed scheme provides better security than other existing schemes, and its high security and extra functionality features make it applicable to telecare medicine information systems used for e-health care medical applications.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
NASA Technical Reports Server (NTRS)
Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.
1980-01-01
A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in P vs lambda results.
Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon
2014-01-01
One of the key technologies to support mobility of mobile station (MS) in mobile communication systems is location management which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling for location update and paging signaling loads of the proposed scheme is developed thoroughly and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
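In a movement-based scheme, the MS counts cell-boundary crossings and performs a location update when the count reaches a threshold d; a smaller d means more updates but a smaller paging area. The sketch below contrasts a single threshold with a two-threshold variant that tracks more tightly while a PS session is active. The session-activity model and threshold values are invented for illustration and are not the paper's analytical model.

```python
import random

def location_updates(movements, d):
    """Single-threshold scheme: update after every d cell-boundary crossings."""
    count, updates = 0, 0
    for _ in range(movements):
        count += 1
        if count >= d:
            updates, count = updates + 1, 0
    return updates

# Two-threshold variant (illustrative): a smaller threshold while bursty PS
# traffic is active reduces paging cost at the price of extra updates.
random.seed(1)
d_active, d_idle = 2, 6
count, updates = 0, 0
for _ in range(10_000):
    active = random.random() < 0.3            # assumed session-activity model
    count += 1
    if count >= (d_active if active else d_idle):
        updates, count = updates + 1, 0

print(location_updates(10_000, 6), updates, location_updates(10_000, 2))
```

As expected, the two-threshold count falls between the always-idle and always-active single-threshold extremes; the paper's contribution is choosing the two thresholds to minimize the total update-plus-paging signaling load.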
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
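The probability of correct decoding for a cascaded scheme can be sketched with binomial tail sums: the inner decoder fails when a block has more bit errors than it can correct, and the outer decoder succeeds when few enough symbols are corrupted. The toy model below assumes each outer symbol is carried by one inner block, an inner (7,4) Hamming code correcting one bit error, and a (255,223)-style outer code correcting 16 symbol errors; these parameters and the independence assumptions are illustrative, not the paper's construction.

```python
from math import comb

def bsc_block_error(n, t, eps):
    """P(inner decoder fails): more than t bit errors in an n-bit block over BSC(eps)."""
    return 1.0 - sum(comb(n, j) * eps**j * (1 - eps)**(n - j) for j in range(t + 1))

def cascaded_correct(n_out, t_out, p_sym):
    """P(outer decoder succeeds): at most t_out symbol errors among n_out symbols."""
    return sum(comb(n_out, j) * p_sym**j * (1 - p_sym)**(n_out - j)
               for j in range(t_out + 1))

eps = 1e-2                                # raw channel bit-error rate
p_sym = bsc_block_error(7, 1, eps)        # inner (7,4) Hamming, corrects 1 bit error
p_ok = cascaded_correct(255, 16, p_sym)   # outer code corrects 16 symbol errors
print(f"inner symbol error prob: {p_sym:.3e}, P(correct decoding): {p_ok:.6f}")
```

Even this crude model shows the cascading effect the abstract describes: a modest inner code turns a 10^-2 channel into a ~10^-3 symbol channel, which the outer code then drives to near-certain correct decoding.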
Chung, Youngseok; Choi, Seokjin; Lee, Youngsook; Park, Namje; Won, Dongho
2016-10-07
More security concerns and complicated requirements arise in wireless sensor networks than in wired networks, due to the vulnerability caused by their openness. To address this vulnerability, anonymous authentication is an essential security mechanism for preserving privacy and providing security. Over recent years, various anonymous authentication schemes have been proposed. Most of them reveal both strengths and weaknesses in terms of security and efficiency. Recently, Farash et al. proposed a lightweight anonymous authentication scheme in ubiquitous networks, which remedies the security faults of previous schemes. However, their scheme still suffers from certain weaknesses. In this paper, we prove that Farash et al.'s scheme fails to provide anonymity, authentication, or password replacement. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Considering the limited capability of sensor nodes, we utilize only low-cost functions, such as one-way hash functions and bit-wise exclusive-OR operations. The security and lightness of the proposed scheme mean that it can be applied to roaming service in localized domains of wireless sensor networks, to provide anonymous authentication of sensor nodes.
NASA Astrophysics Data System (ADS)
Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva
2018-02-01
Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results, so denoising them aids fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work demonstrates the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above-mentioned four denoising schemes, and their fault identification capability as well as SNR, kurtosis and RMSE were compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models, and the performance of the four denoising schemes was evaluated based on the performance of the ANN models. Based on the classification accuracy results, PCA is identified as the best denoising scheme in all these regards.
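The SNR and RMSE metrics used to rank the denoising schemes are straightforward to compute when a clean reference is available. The sketch below applies them to a toy amplitude-modulated tone with a simple moving-average filter standing in for the wavelet/PCA schemes; the signal model, noise level, and filter are all assumptions for illustration.

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR of a denoised estimate relative to the clean signal, in dB."""
    return 10 * np.log10(np.sum(clean**2) / np.sum((estimate - clean)**2))

def rmse(clean, estimate):
    """Root mean square error between estimate and clean signal."""
    return np.sqrt(np.mean((estimate - clean)**2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)
# Toy amplitude-modulated tone standing in for a gear-mesh vibration
clean = np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * np.sin(2 * np.pi * 10 * t))
noisy = clean + 0.5 * rng.standard_normal(t.size)

# Crude moving-average denoiser as a stand-in for the wavelet-based schemes
denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")

print(f"noisy    SNR {snr_db(clean, noisy):5.2f} dB  RMSE {rmse(clean, noisy):.3f}")
print(f"denoised SNR {snr_db(clean, denoised):5.2f} dB  RMSE {rmse(clean, denoised):.3f}")
```

In the paper's setting no clean reference exists, so these metrics are computed against estimated noise floors, and the ANN classification accuracy provides the final, task-level comparison.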
A Generalized Information Theoretical Model for Quantum Secret Sharing
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming
2016-11-01
An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed with quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure, and based on this analysis we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and consider the security of the scheme through simple examples.
NASA Astrophysics Data System (ADS)
Siswantyo, Sepha; Susanti, Bety Hayat
2016-02-01
Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function based on block ciphers. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack on those 4 secure PGV schemes, instantiated with the RC5 algorithm in their basic construction, to test their collision resistance property. The attack result shows that collisions occur in all 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operations in the RC5 algorithm, the XOR operations in the schemes, and the selection of the additional message block value all contribute to the occurrence of collisions.
Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL
NASA Astrophysics Data System (ADS)
Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun
2018-03-01
A novel low voltage ride-through (LVRT) scheme for PMSG-based wind turbines based on the Resistive Superconducting Fault Current Limiter (R-SFCL) is proposed in this paper. The LVRT scheme mainly consists of an R-SFCL connected in series between the transformer and the Grid Side Converter (GSC), and its basic modelling is discussed in detail. The proposed LVRT scheme is implemented to interact with a PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition, which shows that the proposed R-SFCL-based scheme improves the transient performance and LVRT capability, thereby consolidating the grid connection of wind turbines.
A Novel Scheme for Bidirectional and Hybrid Quantum Information Transmission via a Seven-Qubit State
NASA Astrophysics Data System (ADS)
Fang, Sheng-hui; Jiang, Min
2018-02-01
In this paper, we present a novel scheme for bidirectional and hybrid quantum information transmission via a seven-qubit state. We demonstrate that, under the control of the supervisor, two distant participants can simultaneously and deterministically exchange their states with each other, no matter whether they know the states or not. In our scheme, Alice can teleport an arbitrary single-qubit state (two-qubit state) to Bob while Bob simultaneously prepares a known two-qubit state (single-qubit state) for Alice, via the control of the supervisor Charlie. Compared with previous single bidirectional quantum teleportation or single bidirectional remote state preparation schemes, our protocol is a hybrid approach to quantum information transmission. Furthermore, it succeeds with unit probability. Notably, since only Pauli operations and single-qubit and two-qubit measurements are used, our scheme is flexible for physical experiments.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is designing an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and doesn't require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the design. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method reduces area by 13% and power consumption by 66% in comparison with the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
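At its core, energy detection compares the average received energy against a threshold set above the noise floor, declaring the band occupied when the threshold is exceeded; no prior knowledge of the primary signal is needed. A minimal behavioural sketch (the noise variance, threshold margin, and test signal are assumptions, not parameters from the paper's VLSI design):

```python
import numpy as np

def energy_detect(samples, threshold):
    """Declare the band occupied when the average sample energy exceeds threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

rng = np.random.default_rng(42)
n, noise_var = 1024, 1.0
threshold = 1.2 * noise_var          # assumed margin above the noise floor

noise_only = rng.normal(0.0, np.sqrt(noise_var), n)
t = np.arange(n)
with_signal = noise_only + np.cos(2 * np.pi * 0.05 * t)   # primary tone, ~-3 dB SNR

print(energy_detect(noise_only, threshold))
print(energy_detect(with_signal, threshold))
```

A hardware implementation replaces the mean of squares with a squarer, an accumulator, and a comparator, which is why filter-structure reduction translates directly into area and power savings.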
Teleportation of a two-mode entangled coherent state encoded with two-qubit information
NASA Astrophysics Data System (ADS)
Mishra, Manoj K.; Prakash, Hari
2010-09-01
We propose a scheme to teleport a two-mode entangled coherent state encoded with two-qubit information, which improves on the two schemes recently proposed by Liao and Kuang (2007 J. Phys. B: At. Mol. Opt. Phys. 40 1183) and by Phien and Nguyen (2008 Phys. Lett. A 372 2825) in that it gives higher values of the minimum assured fidelity and the minimum average fidelity without using any nonlinear interactions. For the involved coherent states |±α⟩, the minimum average fidelity in our case is ≥0.99 for |α| ≥ 1.6 (i.e. |α|² ≥ 2.6), while the previously proposed schemes referred to above report the same for |α| ≥ 5 (i.e. |α|² ≥ 25). Since it is very challenging to produce superposed coherent states of high coherent amplitude |α|, our teleportation scheme is within the reach of modern technology.
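Why the coherent amplitude |α| matters can be made concrete with the textbook coherent-state overlap formula ⟨α|β⟩ = exp(−|α|²/2 − |β|²/2 + α*β) (standard quantum optics, not taken from the paper): the qubit basis states |±α⟩ become orthogonal only as exp(−2|α|²) decays, so a lower workable |α| is a real experimental advantage.

```python
import numpy as np

def coherent_overlap(alpha, beta):
    """Exact inner product <alpha|beta> of two coherent states."""
    return np.exp(-0.5 * abs(alpha) ** 2 - 0.5 * abs(beta) ** 2
                  + np.conjugate(alpha) * beta)

# |+alpha> and |-alpha> serve as qubit basis states once their overlap,
# which decays as exp(-2|alpha|^2), is negligible.
for a in (1.6, 5.0):
    print(f"|alpha| = {a}: |<alpha|-alpha>| = {abs(coherent_overlap(a, -a)):.2e}")
```

At |α| = 1.6 the overlap is already below 10⁻², so the encoded qubits are effectively distinguishable even at the modest amplitudes this scheme requires.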
Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
Calculations of steady and unsteady, transonic, turbulent channel flows with a time-accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady-state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with much smaller artificial dissipation than that used in the previous steady flow solver, for both steady and unsteady channel flows. The capability to solve time-dependent flows is demonstrated by solving very weakly excited and strongly excited, forced oscillatory channel flows.
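The flavour of a lower-upper factorization update can be sketched with a symmetric Gauss-Seidel (LU-SGS-style) iteration on a model linear system. This is a generic illustration of the forward/backward triangular sweeps such schemes use, not the RPLUS solver; the tridiagonal matrix `A` is an assumed 1-D model problem.

```python
import numpy as np

def lu_sgs_step(A, x, b):
    """One LU-SGS-style update for A x = b, using the approximate
    factorization A ~ (D + L) D^{-1} (D + U)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    r = b - A @ x
    # Forward (lower) sweep, then backward (upper) sweep: each solve is
    # triangular, so no full matrix inversion is ever needed.
    y = np.linalg.solve(D + L, r)
    dx = np.linalg.solve(D + U, D @ y)
    return x + dx

# Diagonally dominant tridiagonal model problem (1-D diffusion stencil).
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)
for _ in range(300):
    x = lu_sgs_step(A, x, b)
residual = np.linalg.norm(A @ x - b)
```

Because the factorization is only approximate, each step is cheap but must be iterated; for diagonally dominant systems like this one the residual shrinks geometrically.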
A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments
Huang, Yuanfei; Ma, Fangchao
2017-01-01
In order to improve the security of remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.'s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.'s scheme still has weaknesses. In this paper, we show that Moon et al.'s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. Through the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme offers the best security functionality and is computationally efficient. PMID:29121050
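The mutual-authentication goal such schemes pursue can be sketched with a nonce-based challenge-response exchange. This is a generic illustration of the pattern, not the abstract's protocol (which uses public key encryption); a shared HMAC key, the role labels, and the message layout are all assumptions for brevity.

```python
import hmac
import hashlib
import os

def prf(key, *parts):
    """HMAC-SHA256 over the concatenated message parts."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        mac.update(p)
    return mac.digest()

# Illustrative long-term secret established at registration time.
key = os.urandom(32)

# Each side contributes a fresh nonce, so neither proof can be replayed.
server_nonce = os.urandom(16)
user_nonce = os.urandom(16)

# User proves knowledge of the key over both nonces; the server verifies,
# then answers with the roles swapped, giving mutual authentication
# without ever transmitting the key itself.
user_proof = prf(key, b"user", server_nonce, user_nonce)
assert hmac.compare_digest(user_proof, prf(key, b"user", server_nonce, user_nonce))
server_proof = prf(key, b"server", user_nonce, server_nonce)
```

The distinct role labels prevent a reflection attack in which one side's proof is echoed back as the other's, one of the impersonation pitfalls the analysed schemes fall into.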