Sample records for processing scheme based

  1. Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai

    2018-03-01

    Deck assembly is a critical quality control point in the final satellite assembly process, and cable extrusion and structure collision problems during assembly directly affect the development quality and schedule of the satellite. Aimed at the problems existing in the deck assembly process, an assembly scheme for satellite decks based on the robot flexibility control principle is proposed in this paper. The scheme is introduced first; secondly, the key technologies of end force perception and flexible docking control in the scheme are studied; then, the implementation process of the assembly scheme for the satellite deck is described in detail; finally, an actual application case of the assembly scheme is given. The results show that, compared with the traditional assembly scheme, the assembly scheme based on the robot flexibility control principle has obvious advantages in work efficiency, reliability, and universality.

  2. 78 FR 40627 - Prohibitions and Conditions on the Importation and Exportation of Rough Diamonds

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... November 5, 2002, the launch of the Kimberley Process Certification Scheme (KPCS) for rough diamonds. Under... implements the Kimberley Process Certification Scheme (KPCS) for rough diamonds. The KPCS is a process, based... not been controlled through the Kimberley Process Certification Scheme. By Executive Order 13312 dated...

  3. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
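
    The correlated-process construction described above is, in essence, a control-variate technique. A minimal Python sketch of the idea (a generic Monte Carlo illustration with assumed toy processes, not the authors' Fokker–Planck solver):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Main process X = f(Z) and auxiliary process Y = g(Z) with known
        # mean E[Y]; driving both with the same noise Z correlates them.
        z = rng.standard_normal(n)
        x = np.exp(0.1 * z)          # quantity of interest, mean to estimate
        y = 1.0 + 0.1 * z            # linearised surrogate with E[Y] = 1

        naive = x.mean()
        beta = np.cov(x, y)[0, 1] / y.var()
        controlled = naive - beta * (y.mean() - 1.0)

        # The variance of the controlled estimator shrinks by (1 - corr^2).
        print(naive, controlled)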

  4. NMRPipe: a multidimensional spectral processing system based on UNIX pipes.

    PubMed

    Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A

    1995-11-01

    The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
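
    The scheme-as-pipeline idea is language-neutral; the following minimal Python sketch (hypothetical stage functions, not NMRPipe's actual modules or command syntax) mimics how data blocks flow through a chain of processing stages under user control:

        import numpy as np

        def pipeline(blocks, *stages):
            # Chain stages lazily, like UNIX pipes: each stage consumes
            # and produces one data block at a time.
            for stage in stages:
                blocks = map(stage, blocks)
            yield from blocks

        def window(x):   return x * np.hanning(len(x))             # apodisation
        def zerofill(x): return np.concatenate([x, np.zeros(len(x))])
        def ft(x):       return np.fft.fft(x)                      # Fourier transform

        fids = (np.random.rand(64) for _ in range(4))              # fake FID blocks
        for spectrum in pipeline(fids, window, zerofill, ft):
            print(spectrum.shape)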

  5. Dynamic neural network-based methods for compensation of nonlinear effects in multimode communication lines

    NASA Astrophysics Data System (ADS)

    Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.

    2017-12-01

    We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.

  6. A novel all-optical label processing for OPS networks based on multiple OOC sequences from multiple-groups OOC

    NASA Astrophysics Data System (ADS)

    Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo

    2007-11-01

    This paper proposes an all-optical label processing scheme using multiple optical orthogonal code sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks, for the first time to the best of our knowledge. In this scheme, multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS used as optical labels, which effectively enlarges the set of optical codes available for labels. Existing optical label processing (OLP) schemes are reviewed and analyzed, the principles of MOOCS-based optical labels for OPS networks are given and analyzed, and the MOOCS-OPS topology and the key units for realizing MOOCS-based optical label packets are then studied in detail. The performance of this novel all-optical label processing technology is analyzed, and the corresponding simulations are performed. The analysis and results show that the proposed scheme can overcome the shortage of available optical orthogonal code (OOC)-based optical labels caused by the limited number of single short-code-length OOCs, and indicate that the MOOCS-OPS scheme is feasible.

  7. Resource Management Scheme Based on Ubiquitous Data Analysis

    PubMed Central

    Lee, Heung Ki; Jung, Jaehee

    2014-01-01

    Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
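
    A hedged sketch of the core idea (a simple moving-average forecast standing in for the paper's web-log-mining predictor): the number of pre-generated worker processes tracks the predicted request rate, bounded above and below.

        from collections import deque

        class AdaptivePool:
            def __init__(self, min_procs=2, max_procs=64, horizon=5):
                self.history = deque(maxlen=horizon)   # recent requests/second
                self.min, self.max = min_procs, max_procs

            def observe(self, requests_per_sec):
                self.history.append(requests_per_sec)

            def resize(self, reqs_per_proc=10.0):
                # Forecast the next interval's load, then size the pool so
                # that no memory is wasted on idle pre-generated processes.
                forecast = sum(self.history) / len(self.history)
                target = int(forecast / reqs_per_proc) + 1
                return max(self.min, min(self.max, target))

        pool = AdaptivePool()
        for rate in [40, 55, 80, 120, 60]:
            pool.observe(rate)
            print(pool.resize())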

  8. System-wide hybrid MPC-PID control of a continuous pharmaceutical tablet manufacturing process via direct compaction.

    PubMed

    Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit

    2013-11-01

    The next generation of QbD-based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with efficient, advanced model-based feedback control systems, to achieve precise control of process variables so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises coupled dynamics involving slow and fast responses, indicating the need for a hybrid control scheme such as a combined MPC-PID scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid MPC-PID scheme. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning both the MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet simulated in gPROMS (Process Systems Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared with PID-only or MPC-only control schemes, illustrating the potential of a hybrid control scheme for improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
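
    The PID half of such a hybrid scheme is compact enough to sketch. In the following minimal Python fragment (illustrative gains, not the paper's ITAE-optimised values), the setpoint tracked by the PID loop is the kind of target an MPC layer would supply:

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0

            def update(self, setpoint, measurement):
                err = setpoint - measurement
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        # Track a first-order process x' = (u - x)/tau with the PID loop.
        pid, x, tau, dt = PID(2.0, 0.5, 0.1, 0.1), 0.0, 1.0, 0.1
        for _ in range(200):
            u = pid.update(1.0, x)       # the setpoint could come from MPC
            x += dt * (u - x) / tau
        print(round(x, 3))               # settles near the setpoint 1.0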

  9. Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform

    NASA Astrophysics Data System (ADS)

    Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail

    2014-06-01

    Distance relays are equipped with an out-of-step tripping scheme to ensure correct relay operation during power swings. The out-of-step condition is a consequence of an unstable power swing, and it requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings is a challenging task. This paper presents an intelligent approach to detect power swings based on the S-Transform signal processing tool. The proposed scheme is based on S-Transform features of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out on the IEEE 39-bus system, and its performance has been compared with a wavelet transform-based power swing detection scheme.

  10. Dynamically protected cat-qubits: a new paradigm for universal quantum computation

    NASA Astrophysics Data System (ADS)

    Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.

    2014-04-01

    We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.

  11. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on the problem of quickly finding the files a user is interested in within cloud storage. Designing a searchable attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes of users are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, a keyword search function is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it achieves semantic security under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  12. A Microphysics-Based Black Carbon Aging Scheme in a Global Chemical Transport Model: Constraints from HIPPO Observations

    NASA Astrophysics Data System (ADS)

    He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.

    2015-12-01

    Black carbon (BC) aging significantly affects its distributions and radiative properties and is an important source of uncertainty in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparison with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over polar regions, where condensation and coagulation are rather weak. We find that BC aging is dominated by the condensation process, which accounts for ~75% of global BC aging, while the coagulation process is important over source regions where a large amount of pre-existing aerosols is available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to a large improvement in model simulations. We are presently analyzing the impact of this scheme on the global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.

  13. A novel all-optical label processing based on multiple optical orthogonal codes sequences for optical packet switching networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun

    2008-05-01

    This paper proposes an all-optical label processing scheme that uses multiple optical orthogonal code sequences (MOOCS)-based optical labels for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of multiple optical orthogonal codes (MOOC) selected from multiple-group optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of the MOOCS-OPS network are given and analyzed. Theoretical analysis first shows that MOOCS greatly enlarges the number of available optical labels compared with the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction, and optical label rewriting, are given and studied. These results verify that the proposed MOOCS-OPS scheme is feasible.

  14. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
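
    The key-stream construction can be illustrated with the ordinary (integer-order, undelayed) logistic map; the paper's delayed fractional-order system plays the same role but with a richer key space. A hedged numpy sketch:

        import numpy as np

        def logistic_keystream(n, r=3.99, x0=0.6, skip=1000):
            # Iterate x <- r*x*(1-x) and quantise each chaotic state
            # to one byte of key stream; skip the initial transient.
            x = x0
            for _ in range(skip):
                x = r * x * (1 - x)
            ks = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1 - x)
                ks[i] = int(x * 256) % 256
            return ks

        def encrypt(img, r=3.99, x0=0.6):
            flat = img.astype(np.uint8).ravel()
            ks = logistic_keystream(flat.size, r, x0)
            # XOR with the key stream; decryption repeats the same XOR.
            return np.bitwise_xor(flat, ks).reshape(img.shape)

        img = np.random.randint(0, 256, (8, 8))
        assert np.array_equal(encrypt(encrypt(img)), img.astype(np.uint8))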

  15. Probabilistic Broadcast-Based Multiparty Remote State Preparation scheme via Four-Qubit Cluster State

    NASA Astrophysics Data System (ADS)

    Zhou, Yun-Jing; Tao, Yuan-Hong

    2018-02-01

    In this letter, we propose a broadcast-based multiparty remote state preparation scheme that realizes the process among three participants. It allows two distant receivers to obtain arbitrary single-qubit states separately and simultaneously, with success probability d^2/(1+d^2), thus generalizing the results in Yu et al. (Quantum Inf. Process. 16(2), 41, 2017).

  16. An improved scheme for Flip-OFDM based on Hartley transform in short-range IM/DD systems.

    PubMed

    Zhou, Ji; Qiao, Yaojun; Cai, Zhuo; Ji, Yuefeng

    2014-08-25

    In this paper, an improved Flip-OFDM scheme is proposed for IM/DD optical systems, in which the modulation/demodulation processing takes advantage of the fast Hartley transform (FHT) algorithm. The improved scheme is realized in one symbol period, whereas the conventional Flip-OFDM scheme based on the fast Fourier transform (FFT) requires two consecutive symbol periods. The complexity of many operations in the improved scheme, such as the CP operation, polarity inversion, and symbol delay, is therefore half that of the conventional scheme. Compared with the FFT on a complex input constellation, the complexity of the FHT on a real input constellation is halved. A transmission experiment over 50-km SSMF verifies the feasibility of the improved scheme. In conclusion, the improved scheme has the same BER performance as the conventional scheme but a clear advantage in complexity.
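
    The Hartley transform's advantage is that it maps real inputs to real outputs. A minimal numpy sketch (a generic discrete Hartley transform computed via one FFT, not the paper's optimised FHT butterfly):

        import numpy as np

        def dht(x):
            # Discrete Hartley transform: H[k] = Re(X[k]) - Im(X[k]),
            # where X = FFT(x); real in, real out.
            X = np.fft.fft(x)
            return X.real - X.imag

        # The DHT is its own inverse up to a scale factor of N.
        x = np.random.rand(8)
        assert np.allclose(dht(dht(x)) / len(x), x)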

  17. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; therefore FPRM-PP can be widely used in the optimal selection of different multi-factor decision-making schemes.

  18. Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    1998-01-01

    A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and the accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on the possible modification of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for a non-strictly hyperbolic system.

  19. Quantum nondemolition measurement of the Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Jiasen; Yu Changshui; Pei Pei

    2010-10-15

    We propose a theoretical scheme for quantum nondemolition measurement of a two-qubit Werner state. We first discuss the scheme with the two qubits confined to a single location and then extend it to the case in which the two qubits are separated. We also consider the experimental realization of the scheme based on cavity quantum electrodynamics. Interestingly, the scheme is robust against the dissipative effects introduced by the probe process. Finally, we give a brief interpretation of our scheme.

  1. Nuclear Explosion and Infrasound Event Resources of the SMDC Monitoring Research Program

    DTIC Science & Technology

    2008-09-01

    2008 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies 928 Figure 7. Dozens of detected infrasound signals from...investigate alternative detection schemes at the two infrasound arrays based on frequency-wavenumber (fk) processing and the F-statistic. The results of... infrasound signal - detection processing schemes. REFERENCES Bahavar, M., B. Barker, J. Bennett, R. Bowman, H. Israelsson, B. Kohl, Y-L. Kung, J. Murphy

  2. UV-Ilmenite based photo-catalysis in lignin based black liquor

    NASA Astrophysics Data System (ADS)

    Amriani, F.; Abimanyu, H.; Natsir, M.; Sutrizal, L.; Nursin, A.

    2018-03-01

    Ilmenite can be found abundantly in iron sand from the seashore along Wolowo beach in Buton district, Southeast Sulawesi, Indonesia. The ability of ilmenite to degrade lignin in black liquor has been investigated. The products of the lignin degradation process in black liquor are expected to be potential resources for fungicides such as coniferyl, sinapyl, and p-coumaryl alcohol. The process was conducted in a 10-watt ultraviolet (UV) light chamber with two parameters applied: exposure time and ilmenite composition. Two process schemes were used, differing in the feed: raw black liquor (scheme 1) and the liquor after adding 1% sodium hydroxide to the lignin-based sludge (scheme 2). Decolourization and lignin degradation after the process were analyzed using a UV-Vis spectrophotometer and LCMS, respectively. The results showed that the treatment in scheme 1 was better than in scheme 2. Both lignin degradation and decolourization can effectively exceed 31% using 0.3 g ilmenite under 10 minutes of UV exposure. The interim analysis by liquid chromatography-mass spectrometry (LCMS) exhibits a suspected target in the range 309.4 to 311.39 g/mol, consistent with p-coumaryl alcohol, while the two other targets were not found in the chromatogram. Thus, this research requires further evaluation and development to maximise the degradation result so that the final goal can be achieved.

  3. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities such as the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  4. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition were analyzed, which enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
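
    Truncated-SVD regularisation of a linearised inversion step has a standard generic form (this sketch is illustrative, not the authors' VFE/MT code): singular values below a threshold are discarded before forming the pseudo-inverse model update.

        import numpy as np

        def tsvd_solve(J, d, rel_threshold=1e-3):
            # Solve J m ~= d while truncating singular values below
            # rel_threshold * s_max; truncation stabilises the update.
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            keep = s > rel_threshold * s[0]
            s_inv = np.divide(1.0, s, out=np.zeros_like(s), where=keep)
            return Vt.T @ (s_inv * (U.T @ d))

        J = np.random.rand(20, 10)     # stand-in Jacobian
        d = np.random.rand(20)         # stand-in data residual
        print(tsvd_solve(J, d).shape)  # model update vector, shape (10,)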

  5. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    NASA Astrophysics Data System (ADS)

    Chao, Luo

    2015-11-01

    In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems of being susceptible to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme is able to perform error checking and error correction in real time. In order to guarantee security, a fractional-order complex chaotic system with shifting of the order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  6. Demonstration of spatial-light-modulation-based four-wave mixing in cold atoms

    NASA Astrophysics Data System (ADS)

    Juo, Jz-Yuan; Lin, Jia-Kang; Cheng, Chin-Yao; Liu, Zi-Yu; Yu, Ite A.; Chen, Yong-Fan

    2018-05-01

    Long-distance quantum optical communications usually require efficient wave-mixing processes to convert the wavelengths of single photons. Many quantum applications based on electromagnetically induced transparency (EIT) have been proposed and demonstrated at the single-photon level, such as quantum memories, all-optical transistors, and cross-phase modulations. However, EIT-based four-wave mixing (FWM) in a resonant double-Λ configuration has a maximum conversion efficiency (CE) of 25% because of absorptive loss due to spontaneous emission. An improved scheme using spatially modulated intensities of two control fields has been theoretically proposed to overcome this conversion limit. In this study, we first demonstrate wavelength conversion from 780 to 795 nm with a 43% CE by using this scheme at an optical density (OD) of 19 in cold 87Rb atoms. According to the theoretical model, the CE in the proposed scheme can further increase to 96% at an OD of 240 under ideal conditions, thereby attaining an identical CE to that of the previous nonresonant double-Λ scheme at half the OD. This spatial-light-modulation-based FWM scheme can achieve a near-unity CE, thus providing an easy method of implementing an efficient quantum wavelength converter for all-optical quantum information processing.

  7. Stabilization and analytical tuning rule of double-loop control scheme for unstable dead-time process

    NASA Astrophysics Data System (ADS)

    Ugon, B.; Nandong, J.; Zang, Z.

    2017-06-01

    The presence of unstable dead-time systems in process plants often poses a daunting challenge in the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance and robustness. In this paper, we conduct stability analysis on a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in process industries. Based on the Routh-Hurwitz necessary and sufficient stability criteria, we establish several stability regions that enclose the P/PID parameter values guaranteeing closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.

  8. Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments

    PubMed Central

    Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel

    2011-01-01

    There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI), the diagnosis schemes differ in the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output, but employs a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs and allows the localization of single faults. This work proposes a new scheme named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of single faults in sensors. Because it does not require any input, it simplifies in an important way the diagnosis of faults in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593

  9. Computer-based creativity enhanced conceptual design model for non-routine design of mechanical systems

    NASA Astrophysics Data System (ADS)

    Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.

    2014-11-01

    Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention and is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity-enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. To enhance the capability to introduce new design variables into the conceptual design process and to uncover more innovative physical structure schemes, an indirect function-structure matching strategy based on reconstructing combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and by making full use of the main and secondary functions of each basic structure when reconstructing the physical structures, new design variables and variants are introduced into the reconstruction process, and a great number of simpler physical structure schemes that organically accomplish the overall function are obtained. The presented creativity-enhanced conceptual design model has a strong capability for introducing new design variables in the function domain and for finding simpler physical structures that accomplish the overall function; it can therefore be used to solve non-routine conceptual design problems.

  10. CP-ABE Based Privacy-Preserving User Profile Matching in Mobile Social Networks

    PubMed Central

    Cui, Weirong; Du, Chenglie; Chen, Jinchao

    2016-01-01

    Privacy-preserving profile matching, a challenging task in mobile social networks, has been getting more attention in recent years. In this paper, we propose a novel scheme based on ciphertext-policy attribute-based encryption to tackle this problem. In our scheme, a user can submit a preference-profile and search for users with a matching profile in decentralized mobile social networks. In this process, neither any participant's profile nor the submitted preference-profile is exposed. Meanwhile, a secure communication channel can be established between each pair of successfully matched users. In contrast to existing related schemes, which are mainly based on secure multi-party computation, our scheme provides verifiability (neither the initiator nor any unmatched user can cheat the other into believing a match) and requires few interactions among users. We provide thorough security analysis and performance evaluation of our scheme, and show its advantages in terms of security, efficiency and usability over state-of-the-art schemes. PMID:27337001

  11. Image processing via level set curvature flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malladi, R.; Sethian, J.A.

    We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach. 15 ref., 6 figs.
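
    A minimal explicit step of the geometric heat equation described above (an illustrative numpy sketch without the paper's image-derived stopping criterion): each pixel moves with speed proportional to the curvature of its level set.

        import numpy as np

        def curvature_flow_step(u, dt=0.1, eps=1e-8):
            # One explicit step of u_t = kappa * |grad u|, with
            # kappa = div(grad u / |grad u|), the level-set curvature.
            uy, ux = np.gradient(u)
            mag = np.sqrt(ux**2 + uy**2) + eps
            ny_y, _ = np.gradient(uy / mag)     # d/dy of unit-normal y-part
            _, nx_x = np.gradient(ux / mag)     # d/dx of unit-normal x-part
            kappa = nx_x + ny_y
            return u + dt * kappa * mag

        img = np.random.rand(64, 64)
        for _ in range(20):                     # noise shrinks under the flow
            img = curvature_flow_step(img)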

  12. A CPT for Improving Turbulence and Cloud Processes in the NCEP Global Models

    NASA Astrophysics Data System (ADS)

    Krueger, S. K.; Moorthi, S.; Randall, D. A.; Pincus, R.; Bogenschutz, P.; Belochitski, A.; Chikira, M.; Dazlich, D. A.; Swales, D. J.; Thakur, P. K.; Yang, F.; Cheng, A.

    2016-12-01

    Our Climate Process Team (CPT) is based on the premise that the NCEP (National Centers for Environmental Prediction) global models can be improved by installing an integrated, self-consistent description of turbulence, clouds, deep convection, and the interactions between clouds and radiative and microphysical processes. The goal of our CPT is to unify the representation of turbulence and subgrid-scale (SGS) cloud processes and to unify the representation of SGS deep convective precipitation and grid-scale precipitation as the horizontal resolution decreases. We aim to improve the representation of small-scale phenomena by implementing a PDF-based SGS turbulence and cloudiness scheme that replaces the boundary layer turbulence scheme, the shallow convection scheme, and the cloud fraction schemes in the GFS (Global Forecast System) and CFS (Climate Forecast System) global models. We intend to improve the treatment of deep convection by introducing a unified parameterization that scales continuously between the simulation of individual clouds when and where the grid spacing is sufficiently fine and the behavior of a conventional parameterization of deep convection when and where the grid spacing is coarse. We will endeavor to improve the representation of the interactions of clouds, radiation, and microphysics in the GFS/CFS by using the additional information provided by the PDF-based SGS cloud scheme. The team is evaluating the impacts of the model upgrades with metrics used by the NCEP short-range and seasonal forecast operations.

  13. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
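
    The PPM mapping itself is simple to illustrate: each group of log2(M) coded bits selects which of M time slots carries the pulse. A hedged numpy sketch of generic PPM modulation (the LDPC stage and Poisson channel are omitted):

        import numpy as np

        def ppm_modulate(bits, M=16):
            # Map each log2(M)-bit group to one pulse position out of M slots.
            k = int(np.log2(M))
            groups = np.asarray(bits).reshape(-1, k)
            idx = groups @ (1 << np.arange(k - 1, -1, -1))  # bits -> slot index
            symbols = np.zeros((len(idx), M), dtype=int)
            symbols[np.arange(len(idx)), idx] = 1           # one pulse per symbol
            return symbols

        bits = np.random.randint(0, 2, 64)
        print(ppm_modulate(bits).shape)   # (16, 16): 16 symbols of 16 slots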

  14. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  15. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches, so the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time, with the extent of the decoupling usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria that can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only reduce the run time but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step.
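
    The sub-cycling idea can be sketched generically (hypothetical callables standing in for the coupled codes' interfaces, not the actual code coupling): the thermal-hydraulic solver marches with a large step, and the neutronics solver is sub-cycled whenever the reactivity changes faster than a tolerance.

        def coupled_march(th_step, nk_step, reactivity, t_end,
                          dt_th=0.5, drho_tol=1e-4):
            # th_step(dt) and nk_step(dt) advance their respective fields;
            # reactivity(t) probes the reactivity at time t.
            t = 0.0
            while t < t_end:
                th_step(dt_th)
                drho = abs(reactivity(t + dt_th) - reactivity(t))
                # Sub-cycle neutronics when the reactivity moves quickly.
                n_sub = max(1, int(drho / drho_tol))
                for _ in range(n_sub):
                    nk_step(dt_th / n_sub)
                t += dt_th

        coupled_march(lambda dt: None, lambda dt: None,
                      lambda t: 1e-4 * t, t_end=2.0)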

  16. A New Proxy Electronic Voting Scheme Achieved by Six-Particle Entangled States

    NASA Astrophysics Data System (ADS)

    Cao, Hai-Jing; Ding, Li-Yuan; Jiang, Xiu-Li; Li, Peng-Fei

    2018-03-01

    In this paper, we use a quantum proxy signature to construct a new secret electronic voting scheme. In our scheme, six-particle entangled states function as quantum channels. The voter Alice, the vote management center Bob, and the scrutineer Charlie only perform two-particle measurements in the Bell basis to realize the electronic voting process, so the scheme reduces the technical difficulty and increases operational efficiency. We use quantum key distribution and the one-time pad to guarantee unconditional security. The significant advantage of our scheme is that the transmitted information capacity is twice that of other schemes.

  17. Efficient demodulation scheme for rolling-shutter-patterning of CMOS image sensor based visible light communications.

    PubMed

    Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2017-10-02

    Recently, even low-end mobile phones have been equipped with high-resolution complementary metal-oxide-semiconductor (CMOS) image sensors. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image sensor based VLC. The implementation algorithm is discussed, and the bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
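
    The basic rolling-shutter demodulation step admits a compact sketch (generic global thresholding, not the authors' efficient scheme): because rows are exposed sequentially, the column-wise mean of each row turns the striped frame into a one-dimensional waveform that can be sliced into bits.

        import numpy as np

        def demodulate_frame(frame, rows_per_bit=4):
            # Rows sample the LED intensity at successive instants.
            waveform = frame.mean(axis=1)
            levels = waveform > waveform.mean()     # simple global threshold
            n = len(levels) // rows_per_bit * rows_per_bit
            groups = levels[:n].reshape(-1, rows_per_bit)
            return (groups.mean(axis=1) > 0.5).astype(int)

        frame = np.random.rand(64, 48)              # stand-in camera frame
        print(demodulate_frame(frame))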

  18. Improved Group Signature Scheme Based on Quantum Teleportation

    NASA Astrophysics Data System (ADS)

    Su, Qi; Li, Wen-Min

    2014-04-01

    Recently, Wen et al. proposed a group signature scheme based on quantum teleportation (Wen et al. 81(5):055001, 2010). In this paper, we find that it is vulnerable to an inside attack, by which all other legal members of the group can forge the signature by exploiting the anti-commutation relation between the Pauli operation Y and the encryption operation H, together with the public board. We then present an improved scheme in which an eavesdropping check after the transmission is introduced to increase the security.

  19. Pricing schemes for new drugs: a welfare analysis.

    PubMed

    Levaggi, Rosella

    2014-02-01

    Drug price regulation is acquiring increasing significance in the investment choices of the pharmaceutical sector. The overall objective is to determine an optimal trade-off between the incentives for innovation, consumer protection, and value for money. However, price regulation is itself a source of distortion. In this study, we examine the welfare properties of listing through a bargaining process and of value-based pricing schemes. The latter are superior instruments to uncertain listing processes for maximising total welfare, but the distribution of the benefits between consumers and the industry depends on the rate of rebate chosen by the regulator. However, through an appropriate choice, it is always possible to define a value-based pricing scheme with risk sharing that both consumers and the industry prefer to an uncertain bargaining process. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.

  2. ECG compression using non-recursive wavelet transform with quality control

    NASA Astrophysics Data System (ADS)

    Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching

    2016-09-01

    While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process. As a result of the properties of multiple-to-one mapping, however, this scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) for a high compression ratio (CR), the second a quadratic curve fitting for linear distortion control, and the third a fuzzy decision-making procedure for minimising the data dependency effect and selecting the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of the training data and can facilitate rapid error control.

  3. Statistical process control based chart for information systems security

    NASA Astrophysics Data System (ADS)

    Khan, Mansoor S.; Cui, Lirong

    2015-07-01

    Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions. We put forward the concept of statistical process control (SPC) for detecting intrusions in computer networks and information systems, and propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners, give a comparison of the proposed scheme with EWMA schemes and the p chart, and finally provide some recommendations for future work.
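
    The EWMA statistic and its time-varying control limits have the textbook form z_t = λx_t + (1−λ)z_{t−1}, with limits μ0 ± Lσ·sqrt(λ/(2−λ)·(1−(1−λ)^{2t})). A minimal sketch (the standard chart; parameter choices illustrative, not the paper's single-parameter variant):

        import numpy as np

        def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
            z = mu0
            for t, xt in enumerate(x, start=1):
                z = lam * xt + (1 - lam) * z
                half = L * sigma * np.sqrt(lam / (2 - lam)
                                           * (1 - (1 - lam) ** (2 * t)))
                yield z, mu0 - half, mu0 + half

        data = np.r_[np.random.normal(0, 1, 30),    # in-control traffic metric
                     np.random.normal(1.5, 1, 10)]  # shifted mean (intrusion)
        for z, lo, hi in ewma_chart(data, mu0=0.0, sigma=1.0):
            if not lo <= z <= hi:
                print(f"signal: z={z:.2f} outside [{lo:.2f}, {hi:.2f}]")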

  4. Team interaction during surgery: a systematic review of communication coding schemes.

    PubMed

    Tiferes, Judith; Bisantz, Ann M; Guru, Khurshid A

    2015-05-15

    Communication problems have been systematically linked to human errors in surgery and a deep understanding of the underlying processes is essential. Although a number of tools exist to assess nontechnical skills, methods to study communication and other team-related processes are far from being standardized, making comparisons challenging. We conducted a systematic review to analyze methods used to study events in the operating room (OR) and to develop a synthesized coding scheme for OR team communication. Six electronic databases were accessed to search for articles that collected individual events during surgery and included detailed coding schemes. Additional articles were added based on cross-referencing. That collection was then classified based on type of events collected, environment type (real or simulated), number of procedures, type of surgical task, team characteristics, method of data collection, and coding scheme characteristics. All dimensions within each coding scheme were grouped based on emergent content similarity. Categories drawn from articles, which focused on communication events, were further analyzed and synthesized into one common coding scheme. A total of 34 of 949 articles met the inclusion criteria. The methodological characteristics and coding dimensions of the articles were summarized. A priori coding was used in nine studies. The synthesized coding scheme for OR communication included six dimensions as follows: information flow, period, statement type, topic, communication breakdown, and effects of communication breakdown. The coding scheme provides a standardized coding method for OR communication, which can be used to develop a priori codes for future studies especially in comparative effectiveness research. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Experimental verification of self-calibration radiometer based on spontaneous parametric downconversion

    NASA Astrophysics Data System (ADS)

    Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng

    2018-03-01

    Based on the spontaneous parametric downconversion process, we propose a novel self-calibration radiometer scheme that can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's degradation and are not tied to the primary-standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp: a relative standard deviation of 0.39% was obtained for the stable lamp. The results validate the principle of the proposed scheme, which could enable a significant breakthrough in self-calibration on space platforms.

  6. Self-match based on polling scheme for passive optical network monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a polling-based self-match scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity requirements on the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.

  7. Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.

    PubMed

    Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing

    2009-06-01

    Methods of image encryption based on the fractional Fourier transform have an inherent flaw in security. We show that these schemes suffer from the deficiency that, for several reasons, one group of encryption keys admits many groups of keys that decrypt the encrypted image correctly. In some schemes, many factors give rise to the deficiencies, for example the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.

  8. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, which performs the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
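
    A minimal numpy sketch of the real compression scheme as described: the three color channels are stacked into one real matrix C, a truncated SVD is computed, and k singular triplets are kept. The side-by-side stacking used to build C is an assumption, since the abstract does not give the exact construction.

        import numpy as np

        def compress_rgb_svd(img, k):
            # img: (H, W, 3) array in [0, 1]; k: number of singular triplets kept
            H, W, _ = img.shape
            C = np.hstack([img[..., c] for c in range(3)])      # H x 3W real matrix
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]                 # rank-k approximation
            rec = np.stack([Ck[:, c * W:(c + 1) * W] for c in range(3)], axis=-1)
            cr = C.size / (k * (C.shape[0] + C.shape[1] + 1))   # crude compression ratio
            return np.clip(rec, 0.0, 1.0), cr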

  9. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, which performs the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR. PMID:28257451

  10. Advanced mobility handover for mobile IPv6 based wireless networks.

    PubMed

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with other corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN using the developed MN-ID field as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily saved in a table developed inside the AP. When the AMH scheme is employed, the network-layer handover process is performed prior to its default time. That is, the network-layer mobility handover is initiated by a developed AMH trigger message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than existing approaches.

  11. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible, and the graphics processing unit (GPU) in current graphics cards can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. The proposed scheme integrates several techniques, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and the Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
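
    To illustrate the multithreaded part of the scheme, here is a simplified parallel-beam backprojection in Python that splits the projection angles across worker threads (numpy releases the GIL inside its kernels, so this coarse-grained split can help on a multi-core PC); the ramp filtering, geometric symmetry and SIMD tuning from the paper are omitted.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def backproject(sino, thetas, n_workers=4):
            # sino: (n_angles, n_det) filtered projections; thetas in radians
            n_det = sino.shape[1]
            g = np.arange(n_det) - n_det / 2.0
            x, y = np.meshgrid(g, g)
            def partial(idx):
                img = np.zeros_like(x)
                for i in idx:
                    t = x * np.cos(thetas[i]) + y * np.sin(thetas[i]) + n_det / 2.0
                    img += np.interp(t.ravel(), np.arange(n_det),
                                     sino[i]).reshape(x.shape)
                return img
            chunks = np.array_split(np.arange(len(thetas)), n_workers)
            with ThreadPoolExecutor(n_workers) as pool:
                return sum(pool.map(partial, chunks)) * np.pi / (2 * len(thetas))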

  12. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible, and the graphics processing unit (GPU) in current graphics cards can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. The proposed scheme integrates several techniques, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and the Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731

  13. A combined approach of self-referencing and Principal Component Thermography for transient, steady, and selective heating scenarios

    NASA Astrophysics Data System (ADS)

    Omar, M. A.; Parvataneni, R.; Zhou, Y.

    2010-09-01

    This manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principal Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. First, the research discusses the three basic processing schemes typically applied in thermography, namely mathematical-transformation-based processing, curve-fitting processing, and direct contrast-based calculations. The proposed algorithm utilizes the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and also computes the anomalies' depth values. The Principal Component Thermography step then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, thus enabling retrieval of their shape and size. The results show that the proposed combined scheme is effective in processing defects of multiple sizes in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for a priori defect-free areas.
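
    The PCT step is essentially a singular value decomposition of the standardized thermogram sub-sequence; a minimal numpy version is sketched below, with each empirical orthogonal function (EOF) returned as an image that highlights one mode of the data variance.

        import numpy as np

        def pct(frames, n_components=5):
            # frames: (n_frames, H, W) thermogram sub-sequence
            n, H, W = frames.shape
            A = frames.reshape(n, -1).astype(float)   # rows: time, columns: pixels
            A -= A.mean(axis=0)                       # remove per-pixel mean
            A /= A.std(axis=0) + 1e-12                # standardize time profiles
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt[:n_components].reshape(n_components, H, W), s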

  14. Implementation of a Cross-Layer Sensing Medium-Access Control Scheme.

    PubMed

    Su, Yishan; Fu, Xiaomei; Han, Guangyao; Xu, Naishen; Jin, Zhigang

    2017-04-10

    In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme, combining the physical layer and data link layer, where the wireless transmission in the physical layer is treated as a compression of the requested packets in the data link layer according to CS theory. We first introduce the use of compressive complex requests to identify the exact active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field at the physical layer, where no bit or frame synchronization is needed, an asynchronous and random request scheme can be implemented without a synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme practically and to demonstrate its validity. For large-scale WSNs, the simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.

  15. Emerging Security Mechanisms for Medical Cyber Physical Systems.

    PubMed

    Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K

    2016-01-01

    The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data are among the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders of magnitude of computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.

  16. An efficient and secure dynamic ID-based authentication scheme for telecare medical information systems.

    PubMed

    Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo

    2012-12-01

    The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital-signs monitoring devices bring the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme and demonstrates that the proposed scheme is more secure and robust for use in a telecare medical information system.

  17. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, jump-out and jump-back-in steps are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can easily be used to process various types of EM measurements without major changes to the scheme itself.
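
    As a hedged sketch of the inversion idea (not the authors' bounded Levenberg implementation), the snippet below fits a Cole-Cole permittivity model to multi-frequency complex data with scipy's bounded trust-region least squares, stacking real and imaginary residuals; the bounds stand in for the physical parameter limits the paper honors.

        import numpy as np
        from scipy.optimize import least_squares

        def cole_cole(omega, eps_inf, d_eps, tau, alpha):
            # Cole-Cole relaxation: eps(w) = eps_inf + d_eps / (1 + (j*w*tau)^(1-alpha))
            return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

        def invert(omega, data, p0, lb, ub):
            def resid(p):
                d = cole_cole(omega, *p) - data
                return np.concatenate([d.real, d.imag])
            return least_squares(resid, p0, bounds=(lb, ub), method="trf").x

        omega = 2 * np.pi * np.logspace(0, 6, 40)
        data = cole_cole(omega, 5.0, 20.0, 1e-4, 0.3)      # synthetic measurement
        est = invert(omega, data, p0=[10, 5, 1e-3, 0.5],
                     lb=[1, 0, 1e-7, 0], ub=[100, 100, 1.0, 1.0])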

  18. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision that is based on the fuzzy properties of image regions and effectively reduces the computational burden of the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  19. Symbol Synchronization for Diffusion-Based Molecular Communications.

    PubMed

    Jamali, Vahid; Ahmadzadeh, Arman; Schober, Robert

    2017-12-01

    Symbol synchronization refers to the estimation of the start of a symbol interval and is needed for reliable detection. In this paper, we develop several symbol synchronization schemes for molecular communication (MC) systems where we consider some practical challenges, which have not been addressed in the literature yet. In particular, we take into account that in MC systems, the transmitter may not be equipped with an internal clock and may not be able to emit molecules with a fixed release frequency. Such restrictions hold for practical nanotransmitters, e.g., modified cells, where the lengths of the symbol intervals may vary due to the inherent randomness in the availability of food and energy for molecule generation, the process for molecule production, and the release process. To address this issue, we develop two synchronization-detection frameworks which both employ two types of molecule. In the first framework, one type of molecule is used for symbol synchronization and the other one is used for data detection, whereas in the second framework, both types of molecule are used for joint symbol synchronization and data detection. For both frameworks, we first derive the optimal maximum likelihood (ML) symbol synchronization schemes as performance upper bounds. Since ML synchronization entails high complexity, for each framework, we also propose three low-complexity suboptimal schemes, namely a linear filter-based scheme, a peak observation-based scheme, and a threshold-trigger scheme, which are suitable for MC systems with limited computational capabilities. Furthermore, we study the relative complexity and the constraints associated with the proposed schemes and the impact of the insertion and deletion errors that arise due to imperfect synchronization. Our simulation results reveal the effectiveness of the proposed synchronization schemes and suggest that the end-to-end performance of MC systems significantly depends on the accuracy of the symbol synchronization.

  20. New coherent laser communication detection scheme based on channel-switching method.

    PubMed

    Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren

    2015-04-01

    A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) channel and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I- or Q-channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit, and using the phase error signal, the signals of the I/Q channels are switched alternately. The principle of this detection scheme is presented, and its sensitivity is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme, and the offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.

  1. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing together with morphological erosion and dilation operations is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext but also enhances the security of the DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
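
    For reference, the classical DRPE building block (without the paper's phase-retrieval variant or the CGI stage) can be sketched in a few lines of numpy: two random phase masks are applied in the input and Fourier planes, and decryption inverts the 4f chain with the conjugate mask.

        import numpy as np

        def drpe_encrypt(img, seed=0):
            rng = np.random.default_rng(seed)
            p1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane mask
            p2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane mask
            return np.fft.ifft2(np.fft.fft2(img * p1) * p2), (p1, p2)

        def drpe_decrypt(cipher, masks):
            p1, p2 = masks
            field = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(p2))
            return np.abs(field)  # recovers the image when it is real and non-negative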

  2. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications, so it is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, key-frames are extracted at the VPH using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel, effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP), and this property helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632
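
    For orientation, plain OMP (the baseline that the paper's optimized OOMP improves upon) can be written compactly in numpy: greedily pick the atom most correlated with the residual, then re-fit all selected atoms by least squares.

        import numpy as np

        def omp(D, y, k):
            # D: (m, n) dictionary with unit-norm columns; y: signal; k: sparsity
            support, x = [], np.zeros(D.shape[1])
            residual = y.astype(float).copy()
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef         # orthogonal residual update
            x[support] = coef
            return x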

  3. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications, so it is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, key-frames are extracted at the VPH using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel, effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP), and this property helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  4. Dynamic clustering scheme based on the coordination of management and control in multi-layer and multi-region intelligent optical network

    NASA Astrophysics Data System (ADS)

    Niu, Xiaoliang; Yuan, Fen; Huang, Shanguo; Guo, Bingli; Gu, Wanyi

    2011-12-01

    A dynamic clustering scheme based on the coordination of management and control is proposed to reduce the network congestion rate and improve the blocking performance of hierarchical routing in multi-layer, multi-region intelligent optical networks. Its implementation relies on mobile agent (MA) technology, which has the advantages of efficiency, flexibility, functionality and scalability. The paper's major contribution is to dynamically adjust domains when the performance of the working network is not ideal. The incorporation of centralized network management system (NMS) and distributed MA control technology migrates the computing process to control-plane nodes, which relieves the burden on the NMS and improves processing efficiency. Experiments are conducted on the Multi-layer and Multi-region Simulation Platform for Optical Network (MSPON) to assess the performance of the scheme.

  5. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as a transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques; therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal, such as visible, infrared or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments based on a practical testbed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.

  6. Impacts of Microphysical Scheme on Convective and Stratiform Characteristics in Two High Precipitation Squall Line Events

    NASA Technical Reports Server (NTRS)

    Wu, Di; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kennedy, Aaron; Mullendore, Gretchen; Gilmore, Matthew; Tao, Wei-Kuo

    2013-01-01

    This study investigates the impact of snow, graupel, and hail processes on simulated squall lines over the Southern Great Plains in the United States. The Weather Research and Forecasting (WRF) model is used to simulate two squall line events in Oklahoma during May 2007, and the simulations are validated against radar and surface observations. Several microphysics schemes are tested in this study, including the WRF 5-Class Microphysics (WSM5), WRF 6-Class Microphysics (WSM6), Goddard Cumulus Ensemble (GCE) Three Ice (3-ice) with graupel, Goddard Two Ice (2-ice), and Goddard 3-ice hail schemes. Simulated surface precipitation is sensitive to the microphysics scheme when the graupel or hail categories are included. All of the 3-ice schemes overestimate the total precipitation with WSM6 having the largest bias. The 2-ice schemes, without a graupel/hail category, produce less total precipitation than the 3-ice schemes. By applying a radar-based convective/stratiform partitioning algorithm, we find that including graupel/hail processes increases the convective areal coverage, precipitation intensity, updraft, and downdraft intensities, and reduces the stratiform areal coverage and precipitation intensity. For vertical structures, simulations have higher reflectivity values distributed aloft than the observed values in both the convective and stratiform regions. Three-ice schemes produce more high reflectivity values in convective regions, while 2-ice schemes produce more high reflectivity values in stratiform regions. In addition, this study has demonstrated that the radar-based convective/stratiform partitioning algorithm can reasonably identify WRF-simulated precipitation, wind, and microphysical fields in both convective and stratiform regions.

  7. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) mounted on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data are then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  8. A novel double loop control model design for chemical unstable processes.

    PubMed

    Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He

    2014-03-01

    In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the unstable process and transforms the original process into a stable first-order plus pure dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers in the improved scheme are designed separately; each controller is easy to design and yields good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.

  9. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical-defocus-fitting-model-based (ODFM) auto-focus scheme is proposed. Starting from the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential in-focus position. Owing to this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping-motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential in-focus position based on the proposed ODFM method. Around the estimated potential in-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential in-focus image to confirm the in-focus position using a contrast-based method. Experimental results prove that the proposed scheme can complete auto-focusing within only 5 to 7 steps, with good performance even under low-light conditions.

  10. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.

  11. Advanced Mobility Handover for Mobile IPv6 Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with other corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN using the developed MN-ID field as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily saved in a table developed inside the AP. When the AMH scheme is employed, the network-layer handover process is performed prior to its default time. That is, the network-layer mobility handover is initiated by a developed AMH trigger message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than existing approaches. PMID:25614890

  12. An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional

    NASA Astrophysics Data System (ADS)

    van Gennip, Yves

    2018-06-01

    We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
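
    A bare-bones version of the mass-conserving graph MBO iteration (diffuse by the graph heat semigroup, then threshold while fixing the phase mass) is sketched below; the Ohta-Kawasaki long-range term is omitted, so this corresponds to the standard graph MBO special case mentioned in result (1).

        import numpy as np
        from scipy.linalg import expm

        def graph_mbo(W, u0, dt=0.5, steps=20):
            # W: symmetric weight matrix; u0: initial {0,1} phase labels
            L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian
            K = expm(-dt * L)                    # heat semigroup over one MBO step
            u, m = u0.astype(float), int(round(u0.sum()))
            for _ in range(steps):
                v = K @ u                        # diffusion step
                u = np.zeros_like(v)
                u[np.argsort(-v)[:m]] = 1.0      # mass-conserving threshold step
            return u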

  13. Mapping Mangrove Density from Rapideye Data in Central America

    NASA Astrophysics Data System (ADS)

    Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru

    2017-06-01

    Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America, from RapidEye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band-combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5, incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5, incorporating the normalized difference red-edge index, NDRI). We also hypothesized that the contribution of the RapidEye red-edge band could improve the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment, to evaluate the contribution of the red-edge band in terms of the accuracy of classification results across these four schemes. The classification maps, compared with the ground reference data, indicated slightly higher accuracy levels for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
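
    The two index-based schemes reduce to one-line band arithmetic; the sketch below assumes the usual definitions on RapidEye bands (band 3 = red, band 4 = red edge, band 5 = near infrared), since the abstract does not spell out the NDRI formula.

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-12)            # scheme-3 extra feature

        def ndri(nir, red_edge):
            return (nir - red_edge) / (nir + red_edge + 1e-12)  # scheme-4 extra feature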

  14. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one physics process in the Weather Research and Forecasting (WRF) model. The LSM combines atmospheric information from the surface-layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes with internal information on the land's state variables and land-surface properties, and provides heat and moisture fluxes over land and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a multiprocessor design whose strengths are efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared with the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.

  15. Health Care Coverage Decision Making in Low- and Middle-Income Countries: Experiences from 25 Coverage Schemes.

    PubMed

    Gutierrez, Hialy; Shewade, Ashwini; Dai, Minghan; Mendoza-Arana, Pedro; Gómez-Dantés, Octavio; Jain, Nishant; Khonelidze, Irma; Nabyonga-Orem, Juliet; Saleh, Karima; Teerawattananon, Yot; Nishtar, Sania; Hornberger, John

    2015-08-01

    Lessons learned by countries that have successfully implemented coverage schemes for health services may be valuable for other countries, especially low- and middle-income countries (LMICs), which likewise are seeking to provide/expand coverage. The research team surveyed experts in population health management from LMICs for information on characteristics of health care coverage schemes and factors that influenced decision-making processes. The level of coverage provided by the different schemes varied. Nearly all the health care coverage schemes involved various representatives and stakeholders in their decision-making processes. Maternal and child health, cardiovascular diseases, cancer, and HIV were among the highest priorities guiding coverage development decisions. Evidence used to inform coverage decisions included medical literature, regional and global epidemiology, and coverage policies of other coverage schemes. Funding was the most commonly reported reason for restricting coverage. This exploratory study provides an overview of health care coverage schemes from participating LMICs and contributes to the scarce evidence base on coverage decision making. Sharing knowledge and experiences among LMICs can support efforts to establish systems for accessible, affordable, and equitable health care.

  16. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers; therefore, studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made by the ANP method, which can provide a reference for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green-space planning and design.

  17. An Efficient User Authentication and User Anonymity Scheme with Provably Security for IoT-Based Medical Care System.

    PubMed

    Li, Chun-Ta; Wu, Tsu-Yang; Chen, Chin-Ling; Lee, Cheng-Chi; Chen, Chien-Ming

    2017-06-23

    In recent years, with the increase in degenerative diseases and the aging population in advanced countries, demands for the medical care of older or solitary people have increased continually in hospitals and healthcare institutions. Applying wireless sensor networks in IoT-based telemedicine systems enables doctors, caregivers or families to monitor patients' physiological conditions at any time and from any place, according to the acquired information. However, transmitting physiological data through the Internet raises concerns about the personal privacy of patients. Therefore, before users can access medical care services in an IoT-based medical care system, they must be authenticated. Typically, user authentication and data encryption are the most critical elements for securing network communications over a public channel between two or more participants. In 2016, Liu and Chung proposed a bilinear-pairing-based password authentication scheme for wireless healthcare sensor networks. They claimed their authentication scheme can not only secure sensor data transmission, but also resist various well-known security attacks. In this paper, we demonstrate that Liu-Chung's scheme has some security weaknesses, and we further present an improved secure authentication and data encryption scheme for the IoT-based medical care system, which provides user anonymity and prevents the security threats of replay and password/sensed-data disclosure attacks. Moreover, we modify the authentication process to reduce redundancy in the protocol design, and the proposed scheme is more efficient in performance compared with previous related schemes. Finally, the proposed scheme is provably secure in the random oracle model under the ECDHP.

  18. A Secure Dynamic Identity and Chaotic Maps Based User Authentication and Key Agreement Scheme for e-Healthcare Systems.

    PubMed

    Li, Chun-Ta; Lee, Cheng-Chi; Weng, Chi-Yao; Chen, Song-Jhih

    2016-11-01

    Secure user authentication schemes in many e-healthcare applications try to prevent unauthorized users from intruding into the e-healthcare systems, and a remote user and a medical server can establish session keys to secure the subsequent communications. However, many schemes do not mask the users' identity information while constructing a login session between two or more parties, even though the personal privacy of users is a significant topic for e-healthcare systems. In order to preserve personal privacy, dynamic-identity-based authentication schemes hide the user's real identity during network communications, so that only the medical server knows the login user's identity. In addition, most of the existing dynamic-identity-based authentication schemes ignore input verification during login, a flaw that may lead to inefficiency in the case of incorrect inputs in the login phase. Regarding the use of secure authentication mechanisms for e-healthcare systems, this paper presents a new dynamic-identity and chaotic-maps-based authentication scheme, and a secure data protection approach is employed in every session to prevent illegal intrusions. The proposed scheme can not only quickly detect incorrect inputs during the login and password-change phases but can also invalidate the future use of a lost or stolen smart card. Comparing functionality and efficiency with other recent authentication schemes, the proposed scheme satisfies desirable security attributes and maintains acceptable efficiency in terms of computational overhead for e-healthcare systems.

  19. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data, using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.

  20. Optimizing School-Based Health-Promotion Programmes: Lessons from a Qualitative Study of Fluoridated Milk Schemes in the UK

    ERIC Educational Resources Information Center

    Foster, Geraldine R. K.; Tickle, Martin

    2013-01-01

    Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…

  1. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present an integrated optical 3-D digital imaging (IO3DI) scheme based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure that uses the DSP together with a field-programmable gate array (FPGA) to realize 3-D imaging, and it adopts phase-measurement profilometry. To realize pipelined processing of fringe projection, image acquisition and fringe-pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system), whose preemptive kernel and powerful configuration tool allow us to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  2. Pot, politics and the press--reflections on cannabis law reform in Western Australia.

    PubMed

    Lenton, Simon

    2004-06-01

    Windows of opportunity for changing drug laws open infrequently, and they often close without legislative change being effected. In this paper the author, who has been intimately involved in the process, describes how evidence-based recommendations to 'decriminalize' cannabis have recently been progressed through public debate and the political process to become law in Western Australia (WA). The Cannabis Control Bill 2003 passed the WA Parliament on 23 September. The Bill, the legislative backing behind the Cannabis Infringement Notice (CIN) Scheme, came into effect on 22 March 2004. This made WA the fourth Australian jurisdiction, after South Australia, the Australian Capital Territory and the Northern Territory, to adopt a prohibition-with-civil-penalties scheme for minor cannabis offences. This paper describes some of the background to the scheme, the process by which it became law, the main provisions of the scheme and its evaluation, and includes reflections on the role of politics and the press in the process. The process of implementation and evaluation is outlined by the author, foreshadowing an ongoing opportunity to understand the impact of the change in legislation.

  3. All-optical conversion scheme from binary to its MTN form with the help of nonlinear material based tree-net architecture

    NASA Astrophysics Data System (ADS)

    Maiti, Anup Kumar; Nath Roy, Jitendra; Mukhopadhyay, Sourangshu

    2007-08-01

    In the field of optical computing and parallel information processing, several number systems have been used for different arithmetic and algebraic operations. An efficient conversion scheme from one number system to another is therefore very important. The modified trinary number (MTN) system has already taken on a significant role in carry- and borrow-free arithmetic operations. In this communication, we propose a tree-net-architecture-based all-optical scheme for converting a binary number to its MTN form. Optical switches using nonlinear material (NLM) play an important role in the scheme.
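
    Assuming MTN uses the balanced (signed) trinary digit set {-1, 0, 1} that enables carry- and borrow-free arithmetic, a digital reference for the conversion looks as follows; the paper's all-optical tree-net realization is, of course, a different implementation of the same mapping.

        def to_mtn(n):
            # Convert a non-negative integer to signed-trinary digits, LSB first.
            digits = []
            while n:
                r = n % 3
                if r == 2:                # write 2 as -1 and carry 1 to the next trit
                    digits.append(-1)
                    n = n // 3 + 1
                else:
                    digits.append(r)
                    n //= 3
            return digits or [0]

        assert to_mtn(0b1011) == [-1, 1, 1]   # 11 = 9 + 3 - 1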

  4. In-TFT-array-process micro defect inspection using nonlinear principal component analysis.

    PubMed

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-11-20

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacturing and has received much attention in the field of automatic optical inspection (AOI). Previously, most attention was paid to the problem of macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those in the TFT array process, the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process is therefore considered a difficult task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect defects in the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image.
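
    A toy version of the two-stage pipeline (nonlinear features via KPCA, then classification) is shown below with scikit-learn; a class-weighted SVC stands in for the paper's 2-norm ISVM, and the data are synthetic placeholders.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((200, 64))            # flattened image patches (placeholder)
        y = rng.integers(0, 3, size=200)     # defect-type labels (placeholder)

        kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1)
        Z = kpca.fit_transform(X)            # nonlinear principal components
        clf = SVC(kernel="rbf", class_weight="balanced").fit(Z, y)
        pred = clf.predict(kpca.transform(X))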

  5. A method of LED free-form tilted lens rapid modeling based on scheme language

    NASA Astrophysics Data System (ADS)

    Dai, Yidan

    2017-10-01

    Based on nonimaging optics principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a rapid modeling method based on the Scheme language is proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the characteristics of the light source and the desired energy distribution on the illumination plane. Then, 3D modeling software and Scheme-language programming are each used to generate the lens model. With the help of optical simulation software, a light source of 1 mm × 1 mm × 1 mm in volume is used in the experiment, with a lateral offset of the illumination area of 0.5 m, and a total of one million rays are traced to obtain simulated results for both models. The simulated output shows that the Scheme language can prevent the model deformation problems caused by the model transfer process; the illumination uniformity reaches 82% and the offset angle is 26°. The efficiency of the modeling process is also greatly increased by using the Scheme language.

  6. Symmetric quantum fully homomorphic encryption with perfect security

    NASA Astrophysics Data System (ADS)

    Liang, Min

    2013-12-01

    Suppose some data have been encrypted: can you compute with the data without decrypting them? This problem has been studied as homomorphic encryption and blind computing. We consider it in the context of quantum information processing and present definitions of quantum homomorphic encryption (QHE) and quantum fully homomorphic encryption (QFHE). Then, based on the quantum one-time pad (QOTP), we construct a symmetric QFHE scheme in which the evaluation algorithm depends on the secret key. This scheme permits any unitary transformation on any n-qubit state that has been encrypted. Compared with classical homomorphic encryption, the QFHE scheme has perfect security. Finally, we also construct a QOTP-based symmetric QHE scheme in which the evaluation algorithm is independent of the secret key.
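
    A minimal numerical sketch of the quantum one-time pad underlying the scheme: each qubit is encrypted by applying X^a Z^b with secret key bits (a, b), and the same operation decrypts up to a global phase. The state and key below are illustrative, not part of the paper.

        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def qotp(state, key):
            """Apply X^a Z^b on each qubit of `state` for key bits (a, b).

            `state` is a 2**n state vector, `key` a list of (a, b) pairs.
            QOTP is its own inverse up to a global phase, so the same
            function also decrypts.
            """
            op = np.array([[1.0 + 0j]])
            for a, b in key:
                u = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                op = np.kron(op, u)
            return op @ state

        psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # 2-qubit Bell state
        key = [(1, 0), (0, 1)]
        enc = qotp(psi, key)
        dec = qotp(enc, key)
        print(np.allclose(np.abs(np.vdot(psi, dec)), 1.0))  # True: recovered up to phase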

  7. Optimal reconstruction of the states in qutrit systems

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Yang, Ming; Cao, Zhuo-Liang

    2010-10-01

    Based on mutually unbiased measurements, an optimal tomographic scheme for multiqutrit states is presented explicitly. Because the reconstruction of states from mutually unbiased measurements is free of information waste, we refer to our scheme as optimal: the number of required conditional operations reaches the minimum in this tomographic scheme for qutrit systems. Special attention is paid to how the different mutually unbiased measurements are realized, that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all these transformations can be decomposed into several basic, implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-basis structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five structures. This scheme should help experimentalists realize the most economical reconstruction of quantum states in qutrit systems.
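
    For context, the four mutually unbiased bases of a single qutrit can be generated from the standard prime-dimension construction; the sketch below builds them numerically and verifies that all inter-basis overlaps have squared magnitude 1/3. The construction formula is the standard one for odd prime dimensions and is an assumption about the bases the paper uses.

        import numpy as np

        d = 3
        w = np.exp(2j * np.pi / d)  # primitive cube root of unity

        # Computational basis plus d Fourier-type bases: for odd prime d the
        # vectors |v_{k,j}>[l] = w**(k*l*l + j*l)/sqrt(d) yield d+1 MUBs.
        bases = [np.eye(d, dtype=complex)]
        for k in range(d):
            B = np.array([[w ** (k * l * l + j * l) for j in range(d)]
                          for l in range(d)]) / np.sqrt(d)  # columns are basis vectors
            bases.append(B)

        # Check mutual unbiasedness: |<u|v>|^2 == 1/d across different bases.
        for m in range(len(bases)):
            for n in range(m + 1, len(bases)):
                overlaps = np.abs(bases[m].conj().T @ bases[n]) ** 2
                assert np.allclose(overlaps, 1.0 / d)
        print("4 mutually unbiased qutrit bases verified")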

  8. FPGA design of correlation-based pattern recognition

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    Optical/digital pattern recognition and tracking based on optical/digital correlation are well-known techniques to detect, identify and localize a target object in a scene. Despite the limited number of operations required by the correlation scheme, its computational time and resource demands are relatively high. The most computationally intensive step in correlation is the transformation from the spatial to the spectral domain and back again. These transformations are also used in optical/digital encryption schemes such as the double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation on video sequences. A trade-off between hardware consumption and the robustness of the correlation can be made in order to understand the limitations of implementing correlation on reconfigurable and portable platforms. Experimental results obtained from HDL simulations and an FPGA prototype demonstrate the advantages of the proposed scheme.
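
    The core of such a correlator is FFT-based cross-correlation. A minimal NumPy sketch follows as a floating-point reference model (the FPGA datapath itself would use fixed-point streaming FFT cores); the scene and reference patch are illustrative.

        import numpy as np

        def correlate_fft(scene: np.ndarray, ref: np.ndarray) -> np.ndarray:
            """2D cross-correlation via the FFT: IFFT(FFT(scene) * conj(FFT(ref))).

            The location of the correlation peak indicates where `ref`
            best matches `scene`.
            """
            F = np.fft.fft2(scene)
            G = np.fft.fft2(ref, s=scene.shape)  # zero-pad reference to scene size
            return np.fft.ifft2(F * np.conj(G)).real

        scene = np.zeros((64, 64))
        scene[20:24, 30:34] = 1.0                # 'target' placed at (20, 30)
        ref = np.ones((4, 4))                    # reference template
        corr = correlate_fft(scene, ref)
        print(np.unravel_index(np.argmax(corr), corr.shape))  # ~ (20, 30)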

  9. MRI-based treatment planning with pseudo CT generated through atlas registration.

    PubMed

    Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-05-01

    To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
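
    As an illustration of the arithmetic-mean synthesis scheme, and assuming the atlas CTs have already been deformed into the patient's space by a separate nonlinear MR-to-MR registration step (e.g., with a toolkit such as SimpleITK), the voxel-wise average is simply:

        import numpy as np

        def mean_pseudo_ct(deformed_atlas_cts):
            """Arithmetic-mean pseudo CT from deformed atlas CT volumes.

            `deformed_atlas_cts` is a list of equally shaped 3D arrays of
            Hounsfield units, each already warped to the patient's anatomy.
            """
            stack = np.stack(deformed_atlas_cts, axis=0)
            return stack.mean(axis=0)

        # Illustrative stand-in data: six 'deformed atlases' of random HU values.
        atlases = [np.random.normal(0, 100, size=(8, 8, 8)) for _ in range(6)]
        pseudo_ct = mean_pseudo_ct(atlases)
        bulk_water = np.zeros_like(pseudo_ct)   # the simple water-HU comparison scheme
        print(pseudo_ct.shape, float(pseudo_ct.mean()))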

  10. MRI-based treatment planning with pseudo CT generated through atlas registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uh, Jinsoo, E-mail: jinsoo.uh@stjude.org; Merchant, Thomas E.; Hua, Chiaho

    2014-05-15

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  11. MRI-based treatment planning with pseudo CT generated through atlas registration

    PubMed Central

    Uh, Jinsoo; Merchant, Thomas E.; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-01-01

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  12. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small: some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
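
    A minimal sketch of such a state-transition predictor: the resource regions from the off-line cluster analysis become the states, each observed execution adds a transition, and the prediction for a new process is the most likely next region given the program's last observed region. The class design, state labels and example data below are illustrative, not the paper's implementation.

        from collections import defaultdict

        class ResourcePredictor:
            """Per-program state-transition model over resource regions."""

            def __init__(self):
                self.counts = defaultdict(lambda: defaultdict(int))  # program -> (prev, next) -> count
                self.last = {}                                       # program -> last observed region

            def observe(self, program: str, region: int) -> None:
                if program in self.last:
                    self.counts[program][(self.last[program], region)] += 1
                self.last[program] = region

            def predict(self, program: str):
                """Most likely next resource region for `program`."""
                prev = self.last.get(program)
                if prev is None:
                    return None
                cands = {r: c for (p, r), c in self.counts[program].items() if p == prev}
                return max(cands, key=cands.get) if cands else prev

        p = ResourcePredictor()
        for region in [0, 0, 1, 0, 0, 1]:   # resource regions of past runs of 'cc'
            p.observe("cc", region)
        print(p.predict("cc"))              # region that most often followed the last state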

  13. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small: some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  14. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. The scheme relies on the exploitation of motion segmentation: hypotheses from the particle filter are propagated to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space, and prediction is more accurate. We also propose to exploit the segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support, and we refer to this model in the correction step of the tracking process. The importance sampling scheme and the model-update strategy improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
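
    A rough sketch of the blob-guided proposal idea: a fraction of the particles is redrawn around motion-segmented blob centroids instead of purely from the dynamic model. Blob detection is assumed done elsewhere, and the mixing proportion and noise scales are illustrative; a full filter would also correct the importance weights for this proposal.

        import numpy as np

        rng = np.random.default_rng(1)

        def propose(particles, blob_centroids, mix=0.5, dyn_std=5.0, blob_std=2.0):
            """Importance sampling mixing the dynamic model with motion blobs.

            A fraction `mix` of particles is redrawn around blobs whose motion
            resembles the target's; the rest diffuse under the dynamic model.
            """
            n = len(particles)
            out = particles + rng.normal(0, dyn_std, particles.shape)  # dynamics
            n_blob = int(mix * n) if len(blob_centroids) else 0
            if n_blob:
                idx = rng.choice(len(blob_centroids), size=n_blob)
                out[:n_blob] = blob_centroids[idx] + rng.normal(0, blob_std, (n_blob, 2))
            return out

        particles = rng.uniform(0, 100, (200, 2))   # (x, y) hypotheses
        blobs = np.array([[40.0, 60.0]])            # centroid of a segmented moving blob
        particles = propose(particles, blobs)
        print(particles[:3])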

  15. An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.

    PubMed

    Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan

    2017-07-14

    High-efficiency lung nodule detection contributes dramatically to the risk assessment of lung cancer, and quickly locating the exact positions of lung nodules is a significant and challenging task. Extensive work has been done in this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require many image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation, and feature extraction, to construct a whole CADe system. It is difficult for those schemes to process and analyze enormous amounts of data as the volume of medical images continues to increase. In addition, some state-of-the-art deep learning schemes impose strict requirements on the database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system is efficient in improving the performance of lung nodule detection and greatly reduces false positives under a huge amount of image data.
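
    A skeletal PyTorch version of a four-channel patch CNN of the kind described; the layer sizes, patch size, and channel assignment (e.g., two image groups, each with an original and a Frangi-enhanced channel) are assumptions for illustration, not the paper's architecture.

        import torch
        import torch.nn as nn

        class FourChannelPatchCNN(nn.Module):
            """Toy 4-channel CNN over (batch, 4, 64, 64) lung-image patches,
            emitting logits for four nodule levels."""

            def __init__(self, n_classes: int = 4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, x):
                return self.head(self.features(x))

        model = FourChannelPatchCNN()
        print(model(torch.randn(2, 4, 64, 64)).shape)  # torch.Size([2, 4])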

  16. The implementation of reverse Kessler warm rain scheme for radar reflectivity assimilation using a nudging approach in New Zealand

    NASA Astrophysics Data System (ADS)

    Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke

    2014-05-01

    Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One reason for developing such a scheme, rather than using 4D-Var for example, is that radar variational schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is calculated based on the saturation adjustment processes; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases which occurred in the summer of 2011/2012 in New Zealand were used to evaluate the new scheme. The forecast scores calculated showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
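
    Step (i) typically inverts a reflectivity-rain water power law. The sketch below assumes an often-quoted Kessler-type relation, dBZ = 43.1 + 17.5 log10(rho*qr) with rho*qr in g m^-3; both the form and the constants are illustrative and may differ from the paper's exact formulation.

        import numpy as np

        def reflectivity_to_qr(dbz, rho):
            """Invert an assumed Kessler-type relation
                dBZ = 43.1 + 17.5 * log10(rho * qr * 1000)
            for the rain water mixing ratio qr (kg/kg), given air density
            rho (kg m^-3). Constants are illustrative only.
            """
            rwc = 10.0 ** ((dbz - 43.1) / 17.5)   # rain water content, g m^-3
            return rwc / (rho * 1000.0)

        for dbz in (20.0, 35.0, 50.0):
            print(dbz, reflectivity_to_qr(dbz, rho=1.1))  # qr rises sharply with dBZ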

  17. Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.

    PubMed

    Kahi, A K; Hirooka, H

    2005-09-01

    Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability in the context of maximizing returns from investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined, ranging from one that used the carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10), the latter representing the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effects on genetic gain and profitability of varying the test capacity and the number of progeny per sire, and of ultrasound scanning of live animals, were examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity increased genetic gain in all schemes. Profitability was optimal in Schemes 2 (similar to Scheme 1, but with selection of young bulls also based on information on carcass traits from their relatives) through 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme by ultrasound scanning of live animals for carcass traits; ultrasound scanning was more important than the addition of any other trait to the selection criteria. These results may provide guidance to Japanese Black cattle breeders.

  18. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. In particular, when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage, which means the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  19. Health worker preferences for performance-based payment schemes in a rural health district in Burkina Faso.

    PubMed

    Yé, Maurice; Diboulo, Eric; Kagoné, Moubassira; Sié, Ali; Sauerborn, Rainer; Loukanova, Svetla

    2016-01-01

    One promising way to improve the motivation of healthcare providers and the quality of healthcare services is performance-based incentives (PBIs), also referred to as performance-based financing. Our study explores healthcare providers' preferences for an incentive scheme based on local resources, aimed at improving the quality of maternal and child health care in the Nouna Health District. A qualitative and quantitative survey was carried out in 2010 involving 94 healthcare providers within 34 health facilities. In addition, in-depth interviews with a total of 33 key informants were conducted at the health facility level. Overall, 85% of health workers were in favour of an incentive scheme based on the health district's own financial resources (95% CI: [71.91; 88.08]). Most health workers (95 and 96%) expressed a preference for financial incentives (95% CI: [66.64; 85.36]) and team-based incentives (95% CI: [67.78; 86.22]), respectively. The suggested performance indicators were those linked to antenatal care services, prevention of mother-to-child human immunodeficiency virus transmission, neonatal care, and immunization. The early involvement of health workers and other stakeholders in designing an incentive scheme proved to be valuable, ensuring their effective participation in the process and overall acceptance of the scheme. This study is an important contribution towards the design of effective PBI schemes.

  20. Health worker preferences for performance-based payment schemes in a rural health district in Burkina Faso

    PubMed Central

    Yé, Maurice; Diboulo, Eric; Kagoné, Moubassira; Sié, Ali; Sauerborn, Rainer; Loukanova, Svetla

    2016-01-01

    Background One promising way to improve the motivation of healthcare providers and the quality of healthcare services is performance-based incentives (PBIs), also referred to as performance-based financing. Our study explores healthcare providers' preferences for an incentive scheme based on local resources, aimed at improving the quality of maternal and child health care in the Nouna Health District. Design A qualitative and quantitative survey was carried out in 2010 involving 94 healthcare providers within 34 health facilities. In addition, in-depth interviews with a total of 33 key informants were conducted at the health facility level. Results Overall, 85% of health workers were in favour of an incentive scheme based on the health district's own financial resources (95% CI: [71.91; 88.08]). Most health workers (95 and 96%) expressed a preference for financial incentives (95% CI: [66.64; 85.36]) and team-based incentives (95% CI: [67.78; 86.22]), respectively. The suggested performance indicators were those linked to antenatal care services, prevention of mother-to-child human immunodeficiency virus transmission, neonatal care, and immunization. Conclusions The early involvement of health workers and other stakeholders in designing an incentive scheme proved to be valuable, ensuring their effective participation in the process and overall acceptance of the scheme. This study is an important contribution towards the design of effective PBI schemes.

  1. Social shaping of food intervention initiatives at worksites: canteen takeaway schemes at two Danish hospitals.

    PubMed

    Poulsen, Signe; Jørgensen, Michael Søgaard

    2011-09-01

    The aim of this article is to analyse the social shaping of worksite food interventions at two Danish worksites. The overall aims are to contribute, first, to the theoretical frameworks for the planning and analysis of food and health interventions at worksites and, second, to a foodscape approach to worksite food interventions. The article is based on a case study of the design of a canteen takeaway (CTA) scheme for employees at two Danish hospitals, carried out as part of a project investigating the shaping and impact of schemes that offer employees meals to buy, to take home, or to eat at the worksite during irregular working hours. Data were collected through semi-structured interviews with stakeholders in the two change processes. Two focus group interviews were also carried out at one hospital, and results from a user survey carried out by other researchers at the other hospital were included. Theoretically, the study was based on the social constitution approach to change processes at worksites and a co-evolution approach to problem-solution complexes as part of change processes. Both interventions were initiated because of the need to improve the food supply for the evening shift and the work-life balance. The shaping of the schemes at the two hospitals became rather different change processes due to local organizational processes shaped by previously developed norms and values. At one hospital, the change process challenged norms and values about food culture and challenged ideas in the canteen kitchen about working hours. At the other hospital, the change was more of a learning process aimed at finding the best way to offer a CTA scheme. Worksite health promotion practitioners should be aware that the intervention itself is an object of negotiation between different stakeholders at a worksite based on existing norms and values. The social contextual model and the setting approach to worksite health interventions lack reflection on how such norms and values might influence the shaping of the intervention. It is recommended that future planning and analyses of worksite health promotion interventions apply a combination of the social constitution approach to worksites and an integrated food supply and demand perspective based on analyses of the co-evolution of problem-solution complexes.

  2. An Efficient User Authentication and User Anonymity Scheme with Provably Security for IoT-Based Medical Care System

    PubMed Central

    Wu, Tsu-Yang; Chen, Chin-Ling; Lee, Cheng-Chi; Chen, Chien-Ming

    2017-01-01

    In recent years, with the increase in degenerative diseases and the aging population in advanced countries, demand for the medical care of older or solitary people has increased continually in hospitals and healthcare institutions. Applying wireless sensor networks in an IoT-based telemedicine system enables doctors, caregivers or families to monitor patients' physiological conditions at any time and place according to the acquired information. However, transmitting physiological data through the Internet raises concerns about the personal privacy of patients. Therefore, before users can access medical care services in an IoT-based medical care system, they must be authenticated. Typically, user authentication and data encryption are most critical for securing network communications over a public channel between two or more participants. In 2016, Liu and Chung proposed a bilinear pairing-based password authentication scheme for wireless healthcare sensor networks. They claimed their authentication scheme cannot only secure sensor data transmission but also resist various well-known security attacks. In this paper, we demonstrate that Liu–Chung's scheme has some security weaknesses, and we further present an improved secure authentication and data encryption scheme for the IoT-based medical care system which provides user anonymity and prevents the security threats of replay and password/sensed-data disclosure attacks. Moreover, we modify the authentication process to reduce redundancy in the protocol design, and the proposed scheme is more efficient in performance than previous related schemes. Finally, the proposed scheme is provably secure in the random oracle model under ECDHP.

  3. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    NASA Astrophysics Data System (ADS)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a linear fractional transform (LFT). It encrypts only the requisite parts of the sensitive information, in the lifting-wavelet transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys, rather than the fixed keys used in other approaches, control the encryption process and harden it against attack. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of S-box and chaotic systems strengthens the overall encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for use in cryptographic applications.
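
    A toy illustration of the permutation-plus-diffusion pattern common to such schemes, using a logistic-map keystream. This is a generic chaotic-cipher sketch, not the authors' S-box/LFT construction, and it is for illustration only, not for real security.

        import numpy as np

        def logistic_orbit(x0: float, n: int, r: float = 3.99) -> np.ndarray:
            """Orbit of the logistic map x <- r*x*(1-x), used as a chaotic source."""
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                xs[i] = x
            return xs

        def encrypt(img, k1=0.3456, k2=0.6789):
            """Toy confusion (chaotic permutation) + diffusion (keystream XOR)."""
            flat = img.ravel()
            perm = np.argsort(logistic_orbit(k1, flat.size))          # confusion
            stream = (logistic_orbit(k2, flat.size) * 256).astype(np.uint8)
            return (flat[perm] ^ stream).reshape(img.shape)

        def decrypt(ct, k1=0.3456, k2=0.6789):
            flat = ct.ravel()
            perm = np.argsort(logistic_orbit(k1, flat.size))
            stream = (logistic_orbit(k2, flat.size) * 256).astype(np.uint8)
            out = np.empty_like(flat)
            out[perm] = flat ^ stream        # undo the XOR, then un-permute
            return out.reshape(ct.shape)

        img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        print(np.array_equal(decrypt(encrypt(img)), img))  # True: round trip works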

  4. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g., a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e., a self-learning system which gradually reduces the error based on historical process information. The paper proposes a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange-edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
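
    The update has the flavor of a part-to-part Gauss-Newton step: given the measured flange-edge error e_k = g_k - g_ref after part k, the next parameter set is p_{k+1} = p_k - γ (JᵀJ)⁻¹ Jᵀ e_k for an identified sensitivity matrix J and learning gain γ. The sketch below runs this iteration on a synthetic linear-plus-drift process, not the paper's deep-drawing model.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 12, 3                          # geometry points, process parameters
        A = rng.normal(size=(m, n))           # unknown true process sensitivity

        def run_process(p, part):
            """Hypothetical forming 'process': flange geometry from parameters p,
            with a slow drift standing in for changing operating conditions."""
            return A @ p + 0.002 * part + rng.normal(0, 1e-3, m)

        g_ref = run_process(np.array([1.0, -0.5, 0.3]), part=0)   # reference geometry

        p = np.zeros(n)                       # initial parameter guess
        J = A + rng.normal(0, 0.05, A.shape)  # identified (imperfect) sensitivities
        gamma = 0.8                           # learning gain < 1 for robustness
        for part in range(1, 11):
            e = run_process(p, part) - g_ref                     # draw-in error, part k
            p -= gamma * np.linalg.lstsq(J, e, rcond=None)[0]    # Gauss-Newton ILC step
            print(part, round(float(np.linalg.norm(e)), 4))      # error shrinks part to part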

  5. Streaming support for data intensive cloud-based sequence analysis.

    PubMed

    Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.

  6. Adaptive angular-velocity Vold-Kalman filter order tracking - Theoretical basis, numerical implementation and parameter investigation

    NASA Astrophysics Data System (ADS)

    Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do

    2016-12-01

    The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved using a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. The adaptive VKF_OT scheme is compared with two other schemes by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computations were performed off-line. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.

  7. Evaluation of hardware costs of implementing PSK signal detection circuit based on "system on chip"

    NASA Astrophysics Data System (ADS)

    Sokolovskiy, A. V.; Dmitriev, D. D.; Veisov, E. A.; Gladyshev, A. B.

    2018-05-01

    The article deals with the choice of architecture for the digital signal processing units implementing a PSK signal detection scheme. The effectiveness of the candidate architectures is assessed by the number of shift registers and computational processes required for a system-on-chip implementation. A statistical estimate of the normalized code-sequence offset in the signal synchronization scheme is used to compare the hardware block architectures.

  8. Laser control of reactions of photoswitching functional molecules.

    PubMed

    Tamura, Hiroyuki; Nanbu, Shinkoh; Ishida, Toshimasa; Nakamura, Hiroki

    2006-07-21

    Laser control schemes for reactions of photoswitching functional molecules are proposed, based on quantum mechanical wave-packet dynamics and the design of laser parameters. Appropriately designed quadratically chirped laser pulses can achieve nearly complete transitions of the wave packet among electronic states. The laser parameters can be optimized using the Zhu-Nakamura theory of nonadiabatic transitions. This method is effective not only for the initial photoexcitation process but also for the pump-and-dump scheme in the middle of the overall photoswitching process. The effects of the momentum of a wave packet crossing a conical intersection on the branching ratio of products are also clarified. These control schemes are successfully applied to the cyclohexadiene/hexatriene photoisomerization (ring-opening) process, which is the reaction center of practical photoswitching molecules such as diarylethenes. The overall efficiency of the ring opening can be appreciably increased by using appropriately designed laser pulses, compared to natural photoisomerization without any control scheme.

  9. New methods of multimode fiber interferometer signal processing

    NASA Astrophysics Data System (ADS)

    Vitrik, Oleg B.; Kulchin, Yuri N.; Maxaev, Oleg G.; Kirichenko, Oleg V.; Kamenev, Oleg T.; Petrov, Yuri S.

    1995-06-01

    New methods of multimode fiber interferometer signal processing are suggested. For the scheme of a single-fiber multimode interferometer with two excited modes, a method based on a special fiber unit is developed. This unit provides mode interaction and subsequent filtering of the summed optical field. As a result, the amplitude of the output signal is modulated by external influences on the interferometer. Stabilization of the interferometer sensitivity is achieved by an additional modulation of the output signal. For the scheme of a single-fiber multimode interferometer exciting a wide mode spectrum, the intermode interference signal is registered by a photodiode matrix and then processed by a dedicated electronic correlation unit. To eliminate temperature destabilization, the registered signal is adapted to temperature-induced changes of the interferometer's optical signal. The achieved parameters for the double-mode scheme are: temporal stability, 0.6% per hour; sensitivity to interferometer length deviations, 3.2 nm. For the multimode scheme: temperature stability, 0.5%/K; temporal instability, 0.2% per hour; sensitivity to interferometer length deviations, 20 nm; dynamic range, 35 dB.

  10. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated by season and by elevation level. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to reductions in the first- and second-quantile biases. However, the bias in the third quantile is reduced only during the dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicators.
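
    A compact sketch of the combination described: a nonlinear power transformation y = a·x^b applied to daily satellite values, with (a, b) searched by a small genetic algorithm so that the corrected series matches the station data's mean and standard deviation. The fitness definition, GA settings, and synthetic data are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(params, sat, obs):
            """Mismatch in mean and std after y = a * x**b (lower is better)."""
            a, b = params
            y = a * sat ** b
            return (y.mean() - obs.mean()) ** 2 + (y.std() - obs.std()) ** 2

        def ga_fit(sat, obs, pop_size=40, gens=60):
            """Tiny real-coded GA over (a, b): truncation selection,
            mean crossover, Gaussian mutation."""
            pop = np.column_stack([rng.uniform(0.1, 3.0, pop_size),
                                   rng.uniform(0.3, 2.0, pop_size)])
            for _ in range(gens):
                scores = np.array([fitness(ind, sat, obs) for ind in pop])
                elite = pop[np.argsort(scores)[: pop_size // 4]]        # selection
                parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
                pop = parents.mean(axis=1)                              # crossover
                pop += rng.normal(0, 0.05, pop.shape)                   # mutation
            scores = np.array([fitness(ind, sat, obs) for ind in pop])
            return pop[np.argmin(scores)]

        sat = rng.gamma(2.0, 4.0, 1000)                     # stand-in satellite rainfall
        obs = 1.3 * sat ** 0.8 + rng.normal(0, 0.5, 1000)   # stand-in station data
        a, b = ga_fit(sat, obs)
        print(f"a={a:.2f}, b={b:.2f}")                      # should land near (1.3, 0.8)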

  11. Hospital financing: calculating inpatient capital costs in Germany with a comparative view on operating costs and the English costing scheme.

    PubMed

    Vogl, Matthias

    2014-04-01

    The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separated national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view on operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules, limit the managerial relevance and transparency of the capital costing scheme. The scheme generates the technical premises for a change from dual financing by insurances (operating costs) and state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to solve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing.

  12. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

    A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  13. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging tool for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to elucidate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and the host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
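
    The temporal core of such a pipeline is a per-pixel scalar Kalman filter that fuses each noisy frame with the running estimate; the sketch below shows that step alone. The random-walk state model and the noise variances are illustrative, and the paper's full scheme adds detection, nonlinear filtering and tracking around it.

        import numpy as np

        def kalman_denoise(frames, q=1e-4, r=1e-2):
            """Per-pixel temporal Kalman filtering of a video stack.

            frames : (T, H, W) float array in [0, 1]; q, r : process and
            measurement noise variances (illustrative). The state model is
            a random walk x_t = x_{t-1} + w to keep the filter simple.
            """
            x = frames[0].astype(float)            # state estimate
            p = np.full_like(x, 1.0)               # estimate variance
            out = [x.copy()]
            for z in frames[1:]:
                p = p + q                          # predict
                k = p / (p + r)                    # Kalman gain
                x = x + k * (z - x)                # update with the new frame
                p = (1.0 - k) * p
                out.append(x.copy())
            return np.stack(out)

        video = np.clip(0.5 + 0.1 * np.random.randn(30, 16, 16), 0, 1)
        den = kalman_denoise(video)
        print(video.std(), den[-1].std())          # temporal noise visibly reduced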

  14. Mining and beneficiation of lunar ores

    NASA Technical Reports Server (NTRS)

    Bunch, T. E.; Williams, R. J.; Mckay, D. S.; Giles, D.

    1979-01-01

    The beneficiation of lunar plagioclase and ilmenite ores to feedstock grade permits a rapid growth of the space manufacturing economy by maximizing the production rate of metals and oxygen. A beneficiation scheme based on electrostatic and magnetic separation is preferred over conventional schemes, but such a scheme cannot be completely modeled because beneficiation processes are empirical and because some properties of lunar minerals have not been measured. To meet anticipated shipping and processing needs, the peak lunar mining rate will exceed 1000 tons/hr by the fifth year of operation. Such capabilities will be best obtained by automated mining vehicles and conveyor systems rather than trucks. It may be possible to extract about 40 kg of volatiles (60 percent H2O) by thermally processing the less than 20 micron ilmenite concentrate extracted from 130 tons of ilmenite ore. A thermodynamic analysis of an extraction process is presented.

  15. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppressing the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions; then, inspired by the thresholding scheme of wavelet analysis, an adaptive interval thresholding sets to zero all components of the intrinsic mode functions that fall below a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good capability in denoising and detail preservation.
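
    A sketch of the EMD-plus-thresholding idea, assuming the third-party PyEMD package for the decomposition. The package choice, the MAD-based noise estimate, the universal hard threshold, and the decision to threshold only the high-frequency IMFs are all assumptions; the paper tailors its threshold to the PRBS response.

        import numpy as np
        from PyEMD import EMD   # third-party PyEMD package, an assumed dependency

        def hht_denoise(sig):
            """EMD-based denoising: hard-threshold the high-frequency IMFs.

            Noise in each IMF is estimated by the median absolute deviation
            and removed with a wavelet-style universal hard threshold; the
            low-frequency IMFs and residue are kept so the trend survives.
            """
            imfs = EMD()(sig)
            out = np.zeros_like(sig)
            for i, imf in enumerate(imfs):
                if i < len(imfs) // 2:                       # noise-dominated IMFs
                    sigma = np.median(np.abs(imf)) / 0.6745  # robust noise scale
                    thr = sigma * np.sqrt(2.0 * np.log(sig.size))
                    out += np.where(np.abs(imf) > thr, imf, 0.0)
                else:
                    out += imf
            return out

        t = np.linspace(0.0, 1.0, 1000)
        clean = np.exp(-5.0 * t)                             # decaying TEM-like response
        noisy = clean + 0.05 * np.random.randn(t.size)
        print(float(np.abs(hht_denoise(noisy) - clean).mean()))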

  16. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction, for which fully automatic fiducial marker-based alignment is the main technique. However, the lack of an accurate method for detecting and tracking fiducial markers has precluded fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET with two main contributions. First, we present a series of algorithms that ensure a high recognition rate and precise localization during the detection of fiducial markers; our solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to handle the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that reduces the tracking of fiducial markers to an incomplete point set registration problem. Because the point set registration is globally optimized, the tracking result is independent of the initial image position in the tilt series, allowing robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves tracking accuracy almost identical to the best current semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (requiring only a rough value of the marker diameter), and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction.
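
    Once marker correspondences between two tilt images are known, each pairwise alignment reduces to a rigid point-set registration. A minimal Kabsch/Procrustes-style 2D solver is sketched below; note that the paper's contribution is the detection and the incomplete-set matching itself, which this sketch assumes already solved.

        import numpy as np

        def rigid_align(P: np.ndarray, Q: np.ndarray):
            """Least-squares rotation R and translation t with R@P_i + t ~= Q_i.

            P, Q : (n, 2) matched fiducial-marker coordinates in two images.
            Classic Kabsch solution via the SVD of the cross-covariance.
            """
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
            R = Vt.T @ np.diag([1.0, d]) @ U.T
            return R, cq - R @ cp

        rng = np.random.default_rng(0)
        P = rng.uniform(0, 100, (10, 2))                 # markers in image A
        theta = np.deg2rad(3.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
        Q = P @ R_true.T + np.array([5.0, -2.0])         # same markers in image B
        R, t = rigid_align(P, Q)
        print(np.allclose(R, R_true), np.round(t, 3))    # True [ 5. -2.]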

  17. In-TFT-Array-Process Micro Defect Inspection Using Nonlinear Principal Component Analysis

    PubMed Central

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-01-01

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacturing and has received much attention in the field of automatic optical inspection (AOI). Previously, most of the focus was on macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those arising in the TFT array process, the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process, where the defects are micro-scale, is therefore considered a difficult task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, a nonlinear version of the well-known PCA algorithm. The scheme can not only detect defects in images captured from the surface of LCD panels but also automatically recognize the types of the detected defects. Results based on real images provided by an LCD manufacturer in Taiwan indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image.

  18. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    PubMed

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  19. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  20. Continuous recovery of valine in a model mixture of amino acids and salt from Corynebacterium bacteria fermentation using a simulated moving bed chromatography.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Jo, Se-Hee; Wang, Nien-Hwa Linda; Mun, Sungyong

    2016-02-26

    The economic efficiency of valine production in related industries is largely affected by the performance of the valine separation process, in which valine is separated from leucine, alanine, and ammonium sulfate. This separation is currently handled by a batch-mode hybrid process based on ion-exchange and crystallization schemes. To make a substantial improvement in the economic efficiency of industrial valine production, such a batch-mode process based on two different separation schemes needs to be converted into a continuous-mode separation process based on a single separation scheme. To address this issue, simulated moving bed (SMB) technology was applied in this study to develop a continuous-mode valine-separation chromatographic process with uniform adsorbent and liquid phases. It was first found that a Chromalite-PCG600C resin is suitable as the adsorbent for such a process, particularly at industrial scale. The intrinsic parameters of each component on the Chromalite-PCG600C adsorbent were determined and then used to select a proper configuration of SMB units, columns, and ports, under which the SMB operating parameters were optimized with a genetic algorithm. Finally, the optimized SMB based on the selected configuration was tested experimentally, confirming its effectiveness in the continuous separation of valine from leucine, alanine, and ammonium sulfate with high purity, high yield, high throughput, and high valine product concentration. The SMB process developed in this study is thus expected to serve as a trustworthy way of improving the economic efficiency of industrial valine production.

  1. An Anonymous User Authentication and Key Agreement Scheme Based on a Symmetric Cryptosystem in Wireless Sensor Networks.

    PubMed

    Jung, Jaewook; Kim, Jiye; Choi, Younsung; Won, Dongho

    2016-08-16

    In wireless sensor networks (WSNs), a registered user can log in to the network and use a user authentication protocol to access data collected from the sensor nodes. Since WSNs are typically deployed in unattended environments and sensor nodes have limited resources, many researchers have made considerable efforts to design a secure and efficient user authentication process. Recently, Chen et al. proposed a secure user authentication scheme using symmetric key techniques for WSNs, claiming that it assures high efficiency and security against different types of attacks. After careful analysis, however, we find that Chen et al.'s scheme is still vulnerable to smart card loss attacks and is susceptible to denial of service attacks, since simply comparing an entered ID with the ID stored in the smart card is not a valid verification. In addition, we observe that their scheme cannot preserve user anonymity. Furthermore, their scheme cannot quickly detect an incorrect password during the login phase, and this flaw incurs unnecessary communication and computational overhead. In this paper, we describe how these attacks work and propose an enhanced anonymous user authentication and key agreement scheme based on a symmetric cryptosystem for WSNs that addresses all of the aforementioned vulnerabilities in Chen et al.'s scheme. Our analysis shows that the proposed scheme improves the level of security and is also more efficient than other related schemes.

  2. MeMoVolc report on classification and dynamics of volcanic explosive eruptions

    NASA Astrophysics Data System (ADS)

    Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.

    2016-11-01

    Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.

  3. Benefits of a 4th Ice Class in the Simulated Radar Reflectivities of Convective Systems Using a Bulk Microphysics Scheme

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen

    2015-01-01

    Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk to multi-moment, multi-class to explicit bin schemes. This study details the benefits of adding a 4th ice class (hail) to an already improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and the inclusion of a rain evaporation correction and vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both the intense and weaker case and were better than earlier 3-ice versions when using a moderate and large intercept for hail, respectively. Simulated reflectivity distributions versus height were also improved versus radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.

  4. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring within the stochastic differential equation prevent the process from entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable within the discretized stochastic differential equation may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
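
    For concreteness, the "standard" resetting approach analyzed here can be written as a minimal Euler-Maruyama loop (drift and diffusion below are illustrative; the corrected scheme proposed in the paper replaces the naive reset step so that an exact property of the diffusion is respected):

      # Euler-Maruyama for dX = a(X) dt + b(X) dW with a natural boundary at x = 0,
      # using the "standard" approach: any step crossing the boundary is reset to it.
      import numpy as np

      def euler_standard(x0, a, b, dt, n_steps, rng):
          x, path = x0, [x0]
          for _ in range(n_steps):
              dw = rng.normal(0.0, np.sqrt(dt))
              x = x + a(x) * dt + b(x) * dw
              if x < 0.0:          # entered the "forbidden region"
                  x = 0.0          # naive reset -- this injects the spurious force
              path.append(x)
          return np.array(path)

      rng = np.random.default_rng(0)
      # Example: mean-reverting diffusion whose noise vanishes at the boundary.
      path = euler_standard(0.5, a=lambda x: -0.5 * x,
                            b=lambda x: np.sqrt(max(x, 0.0)),
                            dt=1e-3, n_steps=10_000, rng=rng)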

  5. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries

    NASA Astrophysics Data System (ADS)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring within the stochastic differential equation prevent the process from entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable within the discretized stochastic differential equation may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.

  6. A Scheme for Understanding Group Processes in Problem-Based Learning

    ERIC Educational Resources Information Center

    Hammar Chiriac, Eva

    2008-01-01

    The purpose of this study was to identify, describe and interpret group processes occurring in tutorials in problem-based learning. Another aim was to investigate if a combination of Steiner's (Steiner, I. D. (1972). "Group process and productivity". New York: Academic Press.) theory of group work and Bion's (Bion, W. R. (1961). "Experiences in…

  7. A Multi-layer Dynamic Model for Coordination Based Group Decision Making in Water Resource Allocation and Scheduling

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying

    Management of group decision making is an important issue in water resource management. To overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model resolves the poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the coordination problem in group decision making under limited resources can be addressed through distance-based group conflict resolution. Simulation results show that the proposed model converges better than existing models.

  8. Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.

    PubMed

    Mulaveesala, Ravibabu; Venkata Ghali, Subbarao

    2011-05-01

    This paper proposes a Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is highlighted with a recently introduced correlation-based post-processing approach and compared with existing phase-based analysis, taking the signal-to-noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat-bottom holes located at different depths.

  9. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model, and the flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed codes. The results for flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  10. Using Classical Reliability Models and Single Event Upset (SEU) Data to Determine Optimum Implementation Schemes for Triple Modular Redundancy (TMR) in SRAM-Based Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, M.; Kim, H.; Phan, A.; Seidleck, C.; LaBel, K.; Pellish, J.; Campola, M.

    2015-01-01

    Space applications are complex systems that require intricate trade analyses for optimum implementations. We focus on a subset of the trade process, using classical reliability theory and SEU data, to illustrate appropriate TMR scheme selection.
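
    The classical-reliability building block for such a trade is compact: with three independent modules of reliability R and a perfect voter, a TMR system works when at least two modules work. A minimal sketch:

      # Classical TMR reliability with a perfect voter:
      # P(at least 2 of 3 modules work) = 3R^2(1-R) + R^3 = 3R^2 - 2R^3.
      def tmr_reliability(r: float) -> float:
          return 3 * r**2 - 2 * r**3

      # TMR only helps while module reliability exceeds 0.5:
      for r in (0.99, 0.90, 0.50, 0.40):
          print(f"R={r}:  simplex={r:.4f}  TMR={tmr_reliability(r):.4f}")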

  11. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
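
    As a baseline for the modified measures described above, the unweighted pixel-based measures can be computed directly from the binarized output and the ground truth (a sketch assuming boolean foreground masks; the paper's scheme adds the weighting that diminishes evaluation bias):

      import numpy as np

      def pixel_prf(binarized: np.ndarray, ground_truth: np.ndarray):
          # True = foreground (text) pixel; assumes both images are non-empty.
          tp = np.logical_and(binarized, ground_truth).sum()
          recall = tp / ground_truth.sum()
          precision = tp / binarized.sum()
          f_measure = 2 * recall * precision / (recall + precision)
          return recall, precision, f_measure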

  12. Nagy-Soper subtraction scheme for multiparton final states

    NASA Astrophysics Data System (ADS)

    Chung, Cheng-Han; Robens, Tania

    2013-04-01

    In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.

  13. Coherent receiver design based on digital signal processing in optical high-speed intersatellite links with M-phase-shift keying

    NASA Astrophysics Data System (ADS)

    Schaefer, Semjon; Gregory, Mark; Rosenkranz, Werner

    2016-11-01

    We present simulative and experimental investigations of different coherent receiver designs for high-speed optical intersatellite links. We focus on frequency offset (FO) compensation in homodyne and intradyne detection systems. The considered laser communication terminal uses an optical phase-locked loop (OPLL), which ensures stable homodyne detection. However, the hardware complexity increases with the modulation order. Therefore, we show that software-based intradyne detection is an attractive alternative for OPLL-based homodyne systems. Our approach is based on digital FO and phase noise compensation, in order to achieve a more flexible coherent detection scheme. Analytic results will further show the theoretical impact of the different detection schemes on the receiver sensitivity. Finally, we compare the schemes in terms of bit error ratio measurements and optimal receiver design.
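
    One common software route to the digital FO compensation discussed here is the M-th power estimator, which removes M-PSK modulation by raising differential samples to the M-th power; the sketch below assumes that approach (the paper does not pin its DSP chain down to this level of detail):

      import numpy as np

      def estimate_frequency_offset(rx: np.ndarray, M: int, symbol_rate: float) -> float:
          # After the M-th power strips the M-PSK data, the mean angle of the
          # differential samples is M * 2*pi * f_off / symbol_rate.
          diff = rx[1:] * np.conj(rx[:-1])
          return np.angle(np.sum(diff**M)) * symbol_rate / (2 * np.pi * M)

      def compensate(rx: np.ndarray, f_off: float, symbol_rate: float) -> np.ndarray:
          n = np.arange(len(rx))
          return rx * np.exp(-2j * np.pi * f_off * n / symbol_rate)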

  14. A Tree Based Self-routing Scheme for Mobility Support in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Young-Duk; Yang, Yeon-Mo; Kang, Won-Seok; Kim, Jin-Wook; An, Jinung

    Recently, WSNs (Wireless Sensor Networks) with mobile robots have become a growing technology that offers efficient communication services for anytime, anywhere applications. However, tiny sensor nodes have very limited network resources due to low battery power, low data rates, node mobility, and channel interference constraints between neighbors. Thus, in this paper, we propose a tree-based self-routing protocol for autonomous mobile robots based on beacon mode and implement it in a real test-bed environment. The proposed scheme offers beacon-based real-time scheduling for a reliable association process between parent and child nodes. In addition, it supports a smooth handover procedure by reducing the flooding overhead of control packets. Through performance evaluation using a real test-bed system and simulation, we illustrate that the proposed scheme demonstrates promising performance for wireless sensor networks with mobile robots.

  15. An advanced teaching scheme for integrating problem-based learning in control education

    NASA Astrophysics Data System (ADS)

    Juuso, Esko K.

    2018-03-01

    Engineering education needs to provide both theoretical knowledge and problem-solving skills. Many topics can be presented in lectures and computer exercises are good tools in teaching the skills. Learning by doing is combined with lectures to provide additional material and perspectives. The teaching scheme includes lectures, computer exercises, case studies, seminars and reports organized as a problem-based learning process. In the gradually refining learning material, each teaching method has its own role. The scheme, which has been used in teaching two 4th year courses, is beneficial for overall learning progress, especially in bilingual courses. The students become familiar with new perspectives and are ready to use the course material in application projects.

  16. ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallberg, J.; Stagg, A.

    2000-10-01

    A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.

  17. Parallel programming of gradient-based iterative image reconstruction schemes for optical tomography.

    PubMed

    Hielscher, Andreas H; Bartel, Sebastian

    2004-02-01

    Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of the iterative image reconstruction schemes that are commonly applied in OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performance are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.

  18. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local-area windows. These local-area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.

  19. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    PubMed

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors present the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called a Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop conditions has been compared with a standard Extended Kalman filter (EKF) and a Feedforward Neural Network (FNN). Online training/correction is carried out for both the RNN and FNN schemes every ten minutes, whenever new untrained measurements become available from a conventional composition analyzer. The RNN shows better state estimation capability than the other state estimation schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  20. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper: a fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
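
    The core mean-shift iteration that such trackers build on is short: repeatedly move the search window to the weighted centroid of a target-likelihood map until the shift is negligible. A toy sketch (assumes the window stays inside the map; the paper's system adds appearance modelling, occlusion handling and camera control):

      import numpy as np

      def mean_shift(likelihood, cx, cy, half, n_iters=20, eps=0.5):
          """Shift a square window of half-width `half` to the weighted
          centroid of `likelihood` until convergence."""
          for _ in range(n_iters):
              y0, y1 = int(cy) - half, int(cy) + half + 1
              x0, x1 = int(cx) - half, int(cx) + half + 1
              w = likelihood[y0:y1, x0:x1]
              ys, xs = np.mgrid[y0:y1, x0:x1]
              total = w.sum()
              if total == 0:
                  break
              nx, ny = (w * xs).sum() / total, (w * ys).sum() / total
              if np.hypot(nx - cx, ny - cy) < eps:      # converged
                  return nx, ny
              cx, cy = nx, ny
          return cx, cy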

  1. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    PubMed Central

    Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461
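
    The essence of the streaming scheme, overlapping transfer with computation, can be sketched with a bounded queue shared between a transfer thread and a processing loop (names are hypothetical; elastream itself wraps real analysis programs and workflow systems):

      import queue, threading

      DONE = object()                              # end-of-stream sentinel

      def transfer(chunks, buf: queue.Queue):
          # Simulates the client-to-cloud transfer, chunk by chunk.
          for chunk in chunks:
              buf.put(chunk)                       # blocks while the buffer is full
          buf.put(DONE)

      def process(buf: queue.Queue, analyze):
          # Starts analyzing while later chunks are still in flight.
          while (chunk := buf.get()) is not DONE:
              analyze(chunk)

      buf = queue.Queue(maxsize=8)                 # bounded: overlaps I/O and compute
      reads = (f"read-batch-{i}" for i in range(100))
      t = threading.Thread(target=transfer, args=(reads, buf))
      t.start()
      process(buf, analyze=print)
      t.join()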

  2. PSO-tuned PID controller for coupled tank system via priority-based fitness scheme

    NASA Astrophysics Data System (ADS)

    Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal

    2015-05-01

    Coupled Tank Systems (CTS) are widely used in industrial applications, especially in chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank; the liquid level in each tank needs to be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It is demonstrated that implementing PSO via the Priority-based Fitness Scheme (PFPSO) is a promising technique for controlling the desired liquid level and improving system performance compared with standard PSO.
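
    A minimal sketch of the idea, PSO searching PID gains against a simulated step response, is given below (a toy first-order tank model and standard PSO with illustrative coefficients; the PFPSO variant studied in the paper additionally prioritizes Ts, SSE and OS within the fitness):

      import numpy as np

      def step_cost(gains, dt=0.05, t_end=30.0):
          """Integral of absolute error for a PID on a toy first-order tank."""
          kp, ki, kd = gains
          y = i_err = prev_err = 0.0
          cost = 0.0
          for _ in range(int(t_end / dt)):
              err = 1.0 - y                          # unit-step level setpoint
              i_err += err * dt
              u = kp * err + ki * i_err + kd * (err - prev_err) / dt
              prev_err = err
              y += dt * (-y / 5.0 + 0.2 * u)         # tank: time constant 5 s
              cost += abs(err) * dt
          return cost

      def pso(cost, n=20, iters=50, lo=0.0, hi=20.0, seed=1):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n, 3)); v = np.zeros_like(x)
          pbest, pcost = x.copy(), np.array([cost(p) for p in x])
          g = pbest[pcost.argmin()]
          for _ in range(iters):
              r1, r2 = rng.random((2, n, 3))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              c = np.array([cost(p) for p in x])
              better = c < pcost
              pbest[better], pcost[better] = x[better], c[better]
              g = pbest[pcost.argmin()]
          return g

      print("tuned (Kp, Ki, Kd):", pso(step_cost))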

  3. Data exchange technology based on handshake protocol for industrial automation system

    NASA Astrophysics Data System (ADS)

    Astafiev, A. V.; Shardin, T. O.

    2018-05-01

    This article considers data exchange technology based on a handshake protocol for industrial automation systems. Methods of organizing the technology in client-server applications are analyzed. The main threats to client-server applications that arise during information interaction between users are identified. A comparative analysis of analogous systems was also carried out, from which the most suitable option was chosen for further use. The basic schemes of operation of the handshake protocol are shown, as well as the general scheme of the implemented application, which describes the entire process of interaction between the client and the server.

  4. Spectroscopic techniques to study the immune response in human saliva

    NASA Astrophysics Data System (ADS)

    Nepomnyashchaya, E.; Savchenko, E.; Velichko, E.; Bogomaz, T.; Aksenov, E.

    2018-01-01

    Studies of immune response dynamics by means of spectroscopic techniques, i.e., laser correlation spectroscopy and fluorescence spectroscopy, are described. Laser correlation spectroscopy is aimed at measuring the sizes of particles in biological fluids. Fluorescence spectroscopy allows study of conformational and other structural changes in immune complexes. We have developed a new scheme for a laser correlation spectrometer and an original signal processing algorithm. We have also suggested a new fluorescence detection scheme based on a prism and an integrating PIN diode. The developed system based on these spectroscopic techniques allows studies of complex processes in human saliva and opens prospects for individualized treatment of immune diseases.

  5. A Structure and Scheme for the Evaluation of Innovative Programs. The EPIC Brief, Issue No. 2.

    ERIC Educational Resources Information Center

    Objective evaluation of school programs is a process in which a school staff collects information used to provide feedback as to whether or not a given set of objectives has been met. The Evaluative Programs for Innovative Curriculums (EPIC) four-step scheme of objective evaluation is based on a three-dimensional structure of variables…

  6. Application of Cross-Correlation Green's Functions Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique to calculate the envelope correlation coefficient (ECC) of an arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits a significant reduction in ECC computation time compared with the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme for high frequency-resolution requirements. The proposed FDTD-CGF-based schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering-parameter-based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
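
    For reference, the conventional scattering-parameter ECC that the FDTD-CGF schemes are positioned against is, for a two-port MIMO antenna, the well-known closed form (valid only under the lossless-antenna assumption, one of the limitations alluded to above):

      \[ \rho_e = \frac{\left|S_{11}^{*}S_{12} + S_{21}^{*}S_{22}\right|^{2}}{\left(1-|S_{11}|^{2}-|S_{21}|^{2}\right)\left(1-|S_{12}|^{2}-|S_{22}|^{2}\right)} \]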

  7. An Anonymous User Authentication and Key Agreement Scheme Based on a Symmetric Cryptosystem in Wireless Sensor Networks

    PubMed Central

    Jung, Jaewook; Kim, Jiye; Choi, Younsung; Won, Dongho

    2016-01-01

    In wireless sensor networks (WSNs), a registered user can log in to the network and use a user authentication protocol to access data collected from the sensor nodes. Since WSNs are typically deployed in unattended environments and sensor nodes have limited resources, many researchers have made considerable efforts to design a secure and efficient user authentication process. Recently, Chen et al. proposed a secure user authentication scheme using symmetric key techniques for WSNs. They claim that their scheme assures high efficiency and security against different types of attacks. After careful analysis, however, we find that Chen et al.’s scheme is still vulnerable to smart card loss attack and is susceptible to denial of service attack, since simply comparing an entered ID with the ID stored in the smart card is not a valid verification. In addition, we also observe that their scheme cannot preserve user anonymity. Furthermore, their scheme cannot quickly detect an incorrect password during the login phase, and this flaw wastes both communication and computational resources. In this paper, we describe how these attacks work, and propose an enhanced anonymous user authentication and key agreement scheme based on a symmetric cryptosystem in WSNs to address all of the aforementioned vulnerabilities in Chen et al.’s scheme. Our analysis shows that the proposed scheme improves the level of security, and is also more efficient relative to other related schemes. PMID:27537890

  8. A joint asymmetric watermarking and image encryption scheme

    NASA Astrophysics Data System (ADS)

    Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.

    2008-02-01

    Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.

  9. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
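
    The prefilter-plus-evaluation pattern on which B-spline interpolation rests is available off the shelf in Python; a sketch of cubic B-spline sub-voxel sampling of a volume (scipy's spline_filter implements a recursive prefilter; the least-squares-optimized filter and Gaussian weighting proposed in the paper go beyond this):

      import numpy as np
      from scipy import ndimage

      volume = np.random.rand(64, 64, 64)           # stand-in volumetric image

      # The recursive prefilter converts samples to B-spline coefficients once...
      coeffs = ndimage.spline_filter(volume, order=3)

      # ...then arbitrary sub-voxel positions are evaluated from the coefficients.
      pts = np.array([[10.25, 20.50, 30.75],
                      [11.40, 21.60, 31.80]]).T     # shape (3, n_points)
      values = ndimage.map_coordinates(coeffs, pts, order=3, prefilter=False)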

  10. Where does the carbon go? A model–data intercomparison of vegetation carbon allocation and turnover processes at two temperate forest free-air CO2 enrichment sites

    PubMed Central

    De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J

    2014-01-01

    Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets. PMID:24844873

  11. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, a feature-selection-based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features that reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature-selection-based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis-criterion-based MMF and the spectral-kurtosis-criterion-based MMF; the proposed feature-selection-based MMF outperforms both in the detection of train axle bearing faults.
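
    A common way to realize a multi-scale morphological filter on a 1-D vibration signal is to average the morphological opening and closing at each structuring-element scale and then combine the scales; the sketch below assumes that construction (the scale selection via feature selection and grey relational analysis, which is the paper's contribution, is not reproduced here):

      import numpy as np
      from scipy import ndimage

      def morph_filter(signal: np.ndarray, size: int) -> np.ndarray:
          # Average of opening and closing with a flat structuring element:
          # suppresses impulsive noise of both polarities.
          opening = ndimage.grey_opening(signal, size=size)
          closing = ndimage.grey_closing(signal, size=size)
          return 0.5 * (opening + closing)

      def multi_scale_morph(signal: np.ndarray, scales=(3, 5, 7, 9)) -> np.ndarray:
          # Equal-weight combination across scales; the cited scheme instead
          # selects/weights scales using the chosen fault-sensitive features.
          return np.mean([morph_filter(signal, s) for s in scales], axis=0)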

  12. Optimal nonlinear information processing capacity in delay-based reservoir computers

    NASA Astrophysics Data System (ADS)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
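
    The train-only-the-readout principle that makes reservoir training easy can be illustrated with a software echo-state sketch (the article's reservoirs are time-delay systems realized in optical/electronic hardware, but the training structure is the same; all parameters below are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 200, 1000                        # reservoir size, number of samples
      u = rng.uniform(-1, 1, T)               # input signal
      y_target = np.roll(u, 3) ** 2           # toy task: delayed nonlinear memory

      W_in = rng.uniform(-0.5, 0.5, N)
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

      x = np.zeros(N); states = np.empty((T, N))
      for t in range(T):
          x = np.tanh(W @ x + W_in * u[t])    # fixed, untrained dynamics
          states[t] = x

      # Only the linear readout is trained, here by ridge regression.
      lam = 1e-6
      W_out = np.linalg.solve(states.T @ states + lam * np.eye(N),
                              states.T @ y_target)
      y_pred = states @ W_out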

  13. Optimal nonlinear information processing capacity in delay-based reservoir computers.

    PubMed

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-11

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.

  14. Refining and end use study of coal liquids II - linear programming analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
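
    The LP pattern itself is standard: maximize product value subject to processing-capacity constraints. A toy blending sketch with scipy (illustrative numbers only, not the study's refinery model):

      import numpy as np
      from scipy.optimize import linprog

      # Choose barrels of petroleum crude (x0) and coal liquid (x1) to
      # maximize margin subject to unit capacities (hypothetical numbers).
      margin = np.array([4.0, 7.0])            # $/bbl net margin by feed
      A_ub = np.array([[1.0, 1.0],             # distillation capacity
                       [0.3, 0.6]])            # hydrotreating capacity
      b_ub = np.array([100.0, 45.0])

      res = linprog(-margin, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
      print("optimal feed slate (bbl):", res.x, " margin ($):", -res.fun)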

  15. Optimal nonlinear information processing capacity in delay-based reservoir computers

    PubMed Central

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-01-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature. PMID:26358528

  16. Optical authentication based on moiré effect of nonlinear gratings in phase space

    NASA Astrophysics Data System (ADS)

    Liao, Meihua; He, Wenqi; Wu, Jiachen; Lu, Dajiang; Liu, Xiaoli; Peng, Xiang

    2015-12-01

    An optical authentication scheme based on the moiré effect of nonlinear gratings in phase space is proposed. According to the phase function relationship of the moiré effect in phase space, an arbitrary authentication image can be encoded into two nonlinear gratings which serve as the authentication lock (AL) and the authentication key (AK). The AL is stored in the authentication system while the AK is assigned to the authorized user. The authentication procedure can be performed using an optoelectronic approach, while the design process is accomplished by a digital approach. Furthermore, this optical authentication scheme can be extended for multiple users with different security levels. The proposed scheme can not only verify the legality of a user identity, but can also discriminate and control the security levels of legal users. Theoretical analysis and simulation experiments are provided to verify the feasibility and effectiveness of the proposed scheme.

  17. Quantum Watermarking Scheme Based on INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

    Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the carrier image for embedding. Secondly, swap and XOR operations are used on the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark image extraction and embedding operations are described, where the key image, the swap operation, and the LSB algorithm are used. When the embedding is performed, the binary key image is changed, which indicates that the watermark has been embedded. Conversely, to extract the watermark image, the key's state must be detected: the extraction operation is carried out when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.

  18. Economic growth and biodiversity loss in an age of tradable permits.

    PubMed

    Rosales, Jon

    2006-08-01

    Tradable permits are increasingly becoming part of environmental policy and conservation programs. The efficacy of tradable permit schemes in addressing the root cause of environmental decline (economic growth) will not be achieved unless the schemes cap economic activity based on ecological thresholds. Lessons can be learned from the largest tradable permit scheme to date, the emissions trading now being implemented under the Kyoto Protocol. The Kyoto Protocol caps neither greenhouse gas emissions at a level that will achieve climate stability nor economic growth. If patterned after the Kyoto Protocol, cap-and-trade schemes for conservation will not ameliorate biodiversity loss either, because they will not address economic growth. In response to these failures to cap economic growth, professional organizations concerned about biodiversity conservation should release position statements on economic growth and ecological thresholds. The statements can then be used by policy makers to infuse these positions into the local, national, and international environmental science-policy process when such schemes are being developed. Infusing language into the science-policy process that calls for capping economic activity based on ecological thresholds represents sound conservation science. Most importantly, position statements have a greater potential to ameliorate biodiversity loss if they are created and released than if this information remains within professional organizations, because there is then the potential for these ideas to be enacted into law and policy.

  19. A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques

    NASA Astrophysics Data System (ADS)

    Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane

    2011-03-01

    As all women over the age of 40 are recommended to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As tools to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme, which evaluates the presence of microcalcification clusters as well as possibly malignant masses based on their contour. The aim is to provide enough information not only on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.

  20. Smith predictor with sliding mode control for processes with large dead times

    NASA Astrophysics Data System (ADS)

    Mehta, Utkal; Kaya, İbrahim

    2017-11-01

    The paper discusses a Smith Predictor scheme with a Sliding Mode Controller (SP-SMC) for processes with large dead times. This technique gives improved load-disturbance rejection with optimal variation of the control input signal. A power rate reaching law is incorporated in the switching part of the sliding mode control, such that the overall performance improves meaningfully. The proposed scheme obtains its parameter values by satisfying a new performance index based on a bi-objective constraint. In a simulation study, the efficiency of the method is evaluated for robustness and transient performance against reported techniques.
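
    The power rate reaching law referred to here is conventionally written (standard sliding mode notation; the paper's exact form may differ in detail) as

      \[ \dot{s} = -k\,|s|^{\alpha}\,\mathrm{sgn}(s), \qquad k > 0,\quad 0 < \alpha < 1, \]

    which drives the sliding variable s to the surface s = 0 in finite time while the correction term shrinks as |s| decreases; this shrinking is what softens the control-input chattering near the surface.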

  1. Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes

    NASA Technical Reports Server (NTRS)

    Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.

    2016-01-01

    The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.

  2. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  3. BossPro: a biometrics-based obfuscation scheme for software protection

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse-engineering. This is especially necessary for mCommerce because application programmes on mobile devices, such as Smartphones and Tablet-PCs, are typically open to misuse by hackers. Therefore, the scheme proposed in this paper ensures that a correct interpretation / execution of the obfuscated program code of the authentication application requires a valid biometrically generated key of the actual person to be authenticated, in real-time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to this application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices that are increasingly used to perform business/financial or other security related applications, but are easily lost or stolen. The scheme starts by creating a personalised copy of any application based on the biometric key generated during an enrolment process with the authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and integrated with biometric data extracted from the client in real time to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of this application program to the biometric key of the client, thus making this application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.

  4. Atomic entanglement purification and concentration using coherent state input-output process in low-Q cavity QED regime.

    PubMed

    Cao, Cong; Wang, Chuan; He, Ling-Yan; Zhang, Ru

    2013-02-25

    We investigate an atomic entanglement purification protocol based on the coherent state input-output process, working with low-Q cavities in the atom-cavity intermediate coupling region. The information of the entangled states is encoded in three-level configured single atoms confined in separate one-side optical micro-cavities. Using the coherent state input-output process, we design a two-qubit parity check module (PCM), which allows quantum nondemolition measurement of the atomic qubits, and show its use for remote parties to distill a high-fidelity atomic entangled ensemble from an initial mixed-state ensemble nonlocally. The proposed scheme can further be used for entanglement concentration of unknown atomic states. Also, by exploiting the PCM, we describe a modified scheme for atomic entanglement concentration that introduces ancillary single atoms. As the coherent state input-output process is robust and scalable in realistic applications, and the detection in the PCM is based on the intensity of the outgoing coherent state, the present protocols may be widely used in large-scale, solid-state-based quantum repeaters and quantum information processing.

  5. The scheme of LLSST based on inter-satellite link for planet gravity field measurement in deep-space mission

    NASA Astrophysics Data System (ADS)

    Yang, Yikang; Li, Xue; Liu, Lei

    2009-12-01

    Gravity field measurement of planets of interest and their moons in the solar system, such as Luna and Mars, is one important task in the next step of deep-space missions. In this paper, similar to the GRACE mission, the scheme inherits LLSST and DOWR technology using common-orbit master-slave satellites around the task planet. Furthermore, via an inter-satellite two-way UQPSK-DSSS link, time synchronization and data processing are implemented autonomously by the master-slave satellites instead of relying on GPS and a ground support system. The conclusion is derived that ISL DOWR based on two-way incoherent time synchronization achieves the same precision level as GRACE DOWR based on GPS time synchronization. Moreover, because of the inter-satellite link, the proposed scheme is rather autonomous for gravity field measurement of the task planet in deep-space missions.

  6. A chaotic secure communication scheme using fractional chaotic systems based on an extended fractional Kalman filter

    NASA Astrophysics Data System (ADS)

    Kiani-B, Arman; Fallahi, Kia; Pariz, Naser; Leung, Henry

    2009-03-01

    In recent years, chaotic secure communication and chaos synchronization have received ever-increasing attention. In this paper, for the first time, a fractional chaotic communication method using an extended fractional Kalman filter (EFKF) is presented. The chaotic synchronization is implemented by the EFKF design in the presence of channel additive noise and processing noise. Encoding the message within the chaotic carrier achieves a satisfactory, typical secure communication scheme. In the proposed system, security is enhanced by spreading the signal in the frequency domain and encrypting it in the time domain. The main advantages of using fractional-order systems, namely increased nonlinearity and a spread power spectrum, are highlighted. To illustrate the effectiveness of the proposed scheme, a numerical example based on the fractional Lorenz dynamical system is presented and the results are compared to the integer-order Lorenz system.

  7. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
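
    The checksum invariant at the heart of such ABFT designs is compact enough to sketch. Below is a minimal illustration of the classic column-checksum check for matrix-vector products, the ancestor of the paper's New-Sum encoding rather than the New-Sum scheme itself; the function name and tolerance are illustrative assumptions.

    ```python
    import numpy as np

    # Column-checksum ABFT for y = A @ x: the vector c = 1^T A satisfies
    # c @ x == sum(A @ x) in exact arithmetic, so a mismatch beyond
    # round-off signals a soft error in the multiplication or in memory.
    def checked_matvec(A, x, rtol=1e-8):
        c = A.sum(axis=0)                # checksum row, reusable per matrix
        y = A @ x
        expected = c @ x
        if abs(y.sum() - expected) > rtol * max(1.0, abs(expected)):
            raise RuntimeError("soft error detected; roll back to checkpoint")
        return y

    rng = np.random.default_rng(0)
    A, x = rng.random((100, 100)), rng.random(100)
    y = checked_matvec(A, x)             # passes silently when error-free
    ```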

  8. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken in nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among the various grouping schemes. Our findings suggest that a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    PubMed

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed, comprising an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method and, therefore, the model-predicted tracking error is guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.

  10. 31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...

  11. 31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...

  12. 31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...

  13. 31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...

  14. 31 CFR 592.301 - Controlled through the Kimberley Process Certification Scheme.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Process Certification Scheme. 592.301 Section 592.301 Money and Finance: Treasury Regulations Relating to... Certification Scheme. (a) Except as otherwise provided in paragraph (b) of this section, the term controlled through the Kimberley Process Certification Scheme refers to the following requirements that apply, as...

  15. Renewing membership in three community-based health insurance schemes in rural India.

    PubMed

    Panda, Pradeep; Chakraborty, Arpita; Raza, Wameq; Bedi, Arjun S

    2016-12-01

    Low renewal rate is a key challenge facing the sustainability of community-based health insurance (CBHI) schemes. While there is a large literature on initial enrolment into such schemes, there is limited evidence on the factors that impede renewal. This article uses longitudinal data to analyse what determines renewal, both 1 year and 2 years after the introduction of three CBHI schemes, which have been operating in rural Bihar and Uttar Pradesh since 2011. We find that initial scheme uptake is ∼23-24% and that 2 years after scheme operation, only ∼20% of the initial enrolees maintain their membership. A household's socio-economic status does not seem to play a large role in impeding renewal. In some instances, a greater understanding of the scheme boosts renewal. The link between health status and use of health care in maintaining renewal is mixed. The clearest effect is that individuals living in households that have received benefits from the scheme are substantially more likely to renew their contracts. We conclude that the low retention rates may be attributed to limited benefit packages, slow claims processing times and the gap between the amounts claimed and amounts paid out by insurance. © The Author 2016. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. The Portsmouth-based glaucoma refinement scheme: a role for virtual clinics in the future?

    PubMed

    Trikha, S; Macgregor, C; Jeffery, M; Kirwan, J

    2012-10-01

    Glaucoma referrals continue to impose a significant burden on Hospital Eye Services (HES), and a large proportion of these referrals are false positives. The aim was to evaluate the Portsmouth glaucoma scheme, which utilises virtual clinics, digital technology, and community optometrists to streamline glaucoma referrals. The stages of the patient trail were mapped and, at each step of the process, 100 consecutive patient decisions were identified. The diagnostic outcomes of 50 consecutive patients referred from the refinement scheme to the HES were identified. A total of 76% of 'glaucoma' referrals were suitable for the refinement scheme. Overall, 94% of disc images were gradeable in the virtual clinic. In all, 11% of patients 'attending' the virtual clinic were accepted into the HES, with 89% being discharged for community follow-up. Of referrals accepted into the HES, the positive predictive value (glaucoma/ocular hypertension/suspect) was 0.78 vs 0.37 in the predating 'unrefined' scheme (95% CI 0.65-0.87). The scheme has released 1400 clinic slots per year for the HES and has produced a £244 200/year cost saving for Portsmouth Hospitals' Trust. The refinement scheme is streamlining referrals and increasing the positive predictive rate in the diagnosis of glaucoma, glaucoma suspect, or ocular hypertension. This consultant-led, practice-based commissioning scheme, if adopted widely, is likely to yield significant cost savings while maintaining a high quality of care within the NHS.

  17. On-chip learning of hyper-spectral data for real time target recognition

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Daud, T.; Thakoor, A.

    2000-01-01

    As the focus of our present paper, we have used the cascade error projection (CEP) learning algorithm (shown to be hardware-implementable) with an on-chip learning (OCL) scheme to obtain a three-orders-of-magnitude speed-up in target recognition compared to software-based learning schemes. It is thus shown that real-time learning as well as data processing for target recognition can be achieved.

  18. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the Storm Water Management Model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and that the optimization converges quickly when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
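
    To make the AHP step concrete, a minimal sketch of how node-ranking weights can be derived from a pairwise comparison of the two indicators is given below; the comparison values are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    # AHP weights are the normalised principal eigenvector of a reciprocal
    # pairwise comparison matrix. Here flood depth is (hypothetically)
    # judged three times as important as flood duration.
    P = np.array([[1.0, 3.0],
                  [1.0 / 3.0, 1.0]])

    vals, vecs = np.linalg.eig(P)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w /= w.sum()
    print(w)   # -> [0.75, 0.25]; weights used to score each flooding node
    ```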

  19. Current use of impact models for agri-environment schemes and potential for improvements of policy design and assessment.

    PubMed

    Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik

    2010-06-01

    Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.

  20. Comparison of Modeling Approaches for Carbon Partitioning: Impact on Estimates of Global Net Primary Production and Equilibrium Biomass of Woody Vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.

    2009-12-01

    Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short vs. long lived tissues exerts a large influence on ecosystem structure and function with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) and gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this unique meta-analysis-based partitioning scheme (MPS), we compared an application of MPS to a terrestrial satellite-based (MODIS) GPP to estimate NPP vs. two global process-based vegetation models (Biome-BGC and VISIT) to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning schemes. [Figure: carbon use efficiency (CUE; NPP/GPP) by forest biome and for the globe; values are means for 2001-2006.]

  1. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  2. Nanopositioning for polarimetric characterization.

    PubMed

    Qureshi, Naser; Kolokoltsev, Oleg V; Ortega-Martínez, Roberto; Ordoñez-Romero, C L

    2008-12-01

    A positioning system with approximately nanometer resolution has been developed based on a new implementation of a motor-driven screw scheme. In contrast to conventional positioning systems based on piezoelectric elements, this system shows remarkably low levels of drift and vibration, and eliminates the need for position feedback during typical data acquisition processes. During positioning or scanning processes, non-repeatability and hysteresis problems inherent in mechanical positioning systems are greatly reduced using a software feedback scheme. As a result, we are able to demonstrate an average mechanical resolution of 1.45 nm and near diffraction-limited imaging using scanning optical microscopy. We propose this approach to nanopositioning as a readily accessible alternative enabling high spatial resolution scanning probe characterization (e.g., polarimetry) and provide practical details for its implementation.

  3. Design and implementation of adaptive PI control schemes for web tension control in roll-to-roll (R2R) manufacturing.

    PubMed

    Raul, Pramod R; Pagilla, Prabhakar R

    2015-05-01

    In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with the relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real-time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
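
    The relay feedback initialization mentioned above follows a well-known recipe; the sketch below shows the textbook Astrom-Hagglund / Ziegler-Nichols version of that computation, not the authors' exact rules, with illustrative numbers.

    ```python
    import math

    # Relay feedback test: a relay of amplitude d drives the loop into a
    # limit cycle with output amplitude a and period Tu. The describing
    # function gives the ultimate gain Ku, from which PI gains follow.
    def pi_gains_from_relay(d, a, Tu):
        Ku = 4.0 * d / (math.pi * a)     # ultimate gain estimate
        Kp = 0.45 * Ku                   # Ziegler-Nichols PI rule
        Ti = Tu / 1.2                    # integral time
        return Kp, Kp / Ti               # (Kp, Ki)

    Kp, Ki = pi_gains_from_relay(d=1.0, a=0.2, Tu=2.5)
    ```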

  4. Automated identification of abnormal metaphase chromosome cells for the detection of chronic myeloid leukemia using microscopic images

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Chen, Xiaodong; Liu, Hong

    2010-07-01

    Karyotyping is an important process for classifying chromosomes into standard classes, and the results are routinely used by clinicians to diagnose cancers and genetic diseases. However, visual karyotyping using microscopic images is time-consuming and tedious, which reduces diagnostic efficiency and accuracy. Although many efforts have been made to develop computerized schemes for automated karyotyping, no scheme can operate without substantial human intervention. Instead of developing a method to classify all chromosome classes, we develop an automatic scheme to detect abnormal metaphase cells by identifying a specific class of chromosomes (class 22) and prescreen for suspicious chronic myeloid leukemia (CML). The scheme includes three steps: (1) iteratively segment randomly distributed individual chromosomes, (2) process segmented chromosomes and compute image features to identify the candidates, and (3) apply an adaptive matching template to identify chromosomes of class 22. An image data set of 451 metaphase cells extracted from bone marrow specimens of 30 positive and 30 negative cases for CML is selected to test the scheme's performance. The overall case-based classification accuracy is 93.3% (100% sensitivity and 86.7% specificity). The results demonstrate the feasibility of applying an automated scheme to detect or prescreen suspicious cancer cases.

  5. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the accurately modeled wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
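
    For reference, the TE-based coefficients that the optimal schemes are compared against can be derived by matching Taylor terms; a minimal sketch for a first derivative on a staggered grid follows (the SA and LS schemes replace this Taylor matching with other fitting criteria; the function name is illustrative).

    ```python
    import numpy as np

    # Taylor-expansion (TE) weights c_m for the staggered first derivative
    #   f'(x) ~ (1/h) * sum_m c_m * [f(x+(m-1/2)h) - f(x-(m-1/2)h)].
    # Matching the first N odd Taylor terms yields a small linear system.
    def staggered_fd_coeffs(N):
        offsets = np.arange(1, N + 1) - 0.5          # half-integer offsets
        A = np.array([[2.0 * off ** (2 * k - 1) for off in offsets]
                      for k in range(1, N + 1)])
        b = np.zeros(N)
        b[0] = 1.0                                   # match the f' term only
        return np.linalg.solve(A, b)

    print(staggered_fd_coeffs(2))   # -> [1.125, -0.04166...] = [9/8, -1/24]
    ```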

  6. Quantum image pseudocolor coding based on the density-stratified method

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It dyes grayscale images into color images to make them more vivid or to highlight certain parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density values from gray to color in parallel according to the colormap. Firstly, two data structures, the quantum image GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help us to describe the scheme further. Finally, future work is discussed.
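
    A classical analogue of the density-stratified step is easy to sketch: gray levels are partitioned into strata and each stratum is dyed with one colormap entry. The thresholds and colors below are illustrative assumptions; the paper's contribution is performing this mapping on quantum image representations, which this sketch does not attempt.

    ```python
    import numpy as np

    # Density slicing: assign each pixel a stratum by its gray value, then
    # look up the stratum's RGB colour from a small colormap.
    def pseudocolor(gray, bounds, colors):
        strata = np.digitize(gray, bounds)           # stratum index per pixel
        return np.asarray(colors, dtype=np.uint8)[strata]

    gray = np.random.randint(0, 256, size=(4, 4))
    bounds = [64, 128, 192]                          # 4 strata over 0..255
    colors = [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]
    rgb = pseudocolor(gray, bounds, colors)          # shape (4, 4, 3)
    ```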

  7. Fast generation of three-qubit Greenberger-Horne-Zeilinger state based on the Lewis-Riesenfeld invariants in coupled cavities.

    PubMed

    Huang, Xiao-Bin; Chen, Ye-Hong; Wang, Zhe

    2016-05-24

    In this paper, we propose an efficient scheme to rapidly generate the three-qubit Greenberger-Horne-Zeilinger (GHZ) state by constructing shortcuts to adiabatic passage (STAP) based on the Lewis-Riesenfeld (LR) invariants in spatially separated cavities connected by optical fibers. Numerical simulations illustrate that the scheme is not only fast but also robust against the decoherence caused by atomic spontaneous emission, cavity losses and fiber photon leakage. This might be useful for realizing fast and noise-resistant quantum information processing in multi-qubit systems.

  8. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Strum, R.; Stiles, D.; Long, C.; Rakhman, A.; Blokland, W.; Winder, D.; Riemer, B.; Wendel, M.

    2018-03-01

    We describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors, employing a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry-Perot sensors for the measurement of strains and vibrations.
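
    The core of such digital demodulation is the arctangent recovery of phase from quadrature components; below is a minimal sketch under the assumption of ideal 90-degree phase-shifted signals (the signal names and simulated vibration are illustrative, not the paper's experimental setup).

    ```python
    import numpy as np

    # With quadrature interference signals I = cos(phi) and Q = sin(phi),
    # the sensor phase is recovered by atan2 and unwrapped so excursions
    # larger than +/- pi are tracked continuously.
    t = np.linspace(0.0, 1.0, 5000)
    phi_true = 12.0 * np.sin(2 * np.pi * 5.0 * t)   # simulated vibration
    I, Q = np.cos(phi_true), np.sin(phi_true)
    phi = np.unwrap(np.arctan2(Q, I))               # demodulated phase
    assert np.allclose(phi, phi_true, atol=1e-6)
    ```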

  9. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. It has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted for cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimizing the new cell segmentation quality criterion produces efficient cell segmentation.
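
    A minimal sketch of the pipeline's key idea, vector quantization to shrink the training set before SVM fitting, is shown below using scikit-learn on synthetic pixel data; the cluster counts, features, and kernel are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    # Quantise each class of an (expert-labelled) pixel database down to a
    # few prototypes, then train the SVM on prototypes only: the decision
    # function stays small, so per-pixel classification stays fast.
    rng = np.random.default_rng(0)
    X = rng.random((20000, 3))                       # colour features
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)        # synthetic labels

    protos, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=50, n_init=3, random_state=0).fit(X[y == c])
        protos.append(km.cluster_centers_)
        labels.append(np.full(50, c))

    svm = SVC(kernel="rbf", probability=True)        # Platt posteriors
    svm.fit(np.vstack(protos), np.concatenate(labels))
    ```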

  10. Considerations and techniques for incorporating remotely sensed imagery into the land resource management process.

    NASA Technical Reports Server (NTRS)

    Brooner, W. G.; Nichols, D. A.

    1972-01-01

    Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management program applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data, using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.

  11. Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits

    NASA Astrophysics Data System (ADS)

    Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng

    2017-11-01

    We propose a novel theoretical scheme of quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, readout, and quantum control the DF qubits. The realization of CNOT gates between two DF qubits are also presented. Numerical simulations show high fidelities of all these processes. Additionally, we discuss the potential of scalability. Our scheme reduces the challenge of classical interfaces from controlling and observing complex quantum systems down to a simple quantum actuator. It also provides a novel way to handle complex quantum systems.

  12. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.

  13. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    NASA Astrophysics Data System (ADS)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables one to understand electrical processes more deeply, to understand the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to the cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: the contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (a mental state of the student) that drives the desire to solve the problem and to raise the question "why?"; a hypothesis is made; searches for solutions are carried out; the answer is sought. Due to the complexity of the architectural schemes, modern methods of computer analysis and synthesis are considered in this work. Examples are given of analog circuits with improved performance engineered by students, in the framework of student scientific and research work, based on standard software and on software developed at the Department of Microelectronics, MEPhI.

  14. Evaluating Payments for Environmental Services: Methodological Challenges

    PubMed Central

    2016-01-01

    Over the last fifteen years, Payments for Environmental Services (PES) schemes have become very popular environmental policy instruments, but the academic literature has begun to question their additionality. The literature attempts to estimate the causal effect of these programs by applying impact evaluation (IE) techniques. However, PES programs are complex instruments, and IE methods cannot be directly applied without adjustments. Based on a systematic review of the literature, this article proposes a framework for the methodological process of designing an IE for PES schemes. It reviews and discusses the methodological choices at each step of the process and proposes guidelines for practitioners. PMID:26910850

  15. Centrifuge: rapid and sensitive classification of metagenomic sequences

    PubMed Central

    Song, Li; Breitwieser, Florian P.

    2016-01-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
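
    A toy illustration of the Burrows-Wheeler transform underlying the FM-index is given below; real indexes such as Centrifuge's are built via suffix arrays rather than explicit rotations, which would be infeasible at genome scale.

    ```python
    # Burrows-Wheeler transform: last column of the sorted rotation matrix.
    # The "$" sentinel is lexicographically smallest and marks the end.
    def bwt(text):
        s = text + "$"
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations)

    print(bwt("ACGTACGT"))   # compact, searchable permutation of the input
    ```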

  16. Optical image encryption system using nonlinear approach based on biometric authentication

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Sinha, Aloka

    2017-07-01

    A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for the decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside the face biometric-based phase mask key using the base changing rule of logarithms for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and the validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against the brute force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance with desired advantages in comparison to the linear cryptosystem.
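
    The phase-truncation step that makes the cryptosystem nonlinear can be sketched in a few lines of NumPy; the image stand-in and variable names are illustrative, and the biometric key-hiding stage is omitted.

    ```python
    import numpy as np

    # Phase truncation at the Fourier plane: keep the spectrum's amplitude
    # on the ciphertext path and retain its phase as the decryption key.
    img = np.random.rand(64, 64)                 # stand-in for the input image
    F = np.fft.fft2(img)
    amplitude = np.abs(F)                        # phase-truncated amplitude
    phase_key = np.exp(1j * np.angle(F))         # secret decryption key

    # Decryption recombines the amplitude with the phase key.
    recovered = np.real(np.fft.ifft2(amplitude * phase_key))
    assert np.allclose(recovered, img)
    ```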

  17. GPS data processing of networks with mixed single- and dual-frequency receivers for deformation monitoring

    NASA Astrophysics Data System (ADS)

    Zou, X.; Deng, Z.; Ge, M.; Dick, G.; Jiang, W.; Liu, J.

    2010-07-01

    In order to obtain crustal deformations of higher spatial resolution, existing GPS networks must be densified. This densification can be carried out using single-frequency receivers at moderate costs. However, ionospheric delay handling is required in the data processing. We adapt the Satellite-specific Epoch-differenced Ionospheric Delay model (SEID) for GPS networks with mixed single- and dual-frequency receivers. The SEID model is modified to utilize the observations from the three nearest dual-frequency reference stations in order to avoid contaminations from more remote stations. As data of only three stations are used, an efficient missing data constructing approach with polynomial fitting is implemented to minimize data losses. Data from large scale reference networks extended with single-frequency receivers can now be processed, based on the adapted SEID model. A new data processing scheme is developed in order to make use of existing GPS data processing software packages without any modifications. This processing scheme is evaluated using a sub-network of the German SAPOS network. The results verify that the new scheme provides an efficient way to densify existing GPS networks with single-frequency receivers.

  18. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.

    PubMed

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-09-25

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, making it an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the lifetime of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of the cluster head function and the optimization of the cluster head selection criteria and the information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.

  19. A Novel DFT-Based DOA Estimation by a Virtual Array Extension Using Simple Multiplications for FMCW Radar

    PubMed Central

    Kim, Bongseok; Kim, Sangdong; Lee, Jonghun

    2018-01-01

    We propose a novel discrete Fourier transform (DFT)-based direction of arrival (DOA) estimation by a virtual array extension using simple multiplications for frequency modulated continuous wave (FMCW) radar. DFT-based DOA estimation is usually employed in radar systems because it provides the advantage of low complexity for real-time signal processing. In order to enhance the resolution of DOA estimation or to decrease the probability of missed detection, it is essential to have a considerable number of channel signals. However, due to constraints of space and cost, it is not easy to increase the number of channel signals. In order to address this issue, we increase the number of effective channel signals by generating virtual channel signals using simple multiplications of the given channel signals. The increase in channel signals allows the proposed scheme to detect DOA more accurately than the conventional scheme while using the same number of physical channels. Simulation results show that the proposed scheme achieves improved DOA estimation compared to the conventional DFT-based method. Furthermore, the effectiveness of the proposed scheme in a practical environment is verified through experiment. PMID:29758016
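
    The DFT-to-angle mapping at the core of such estimators is compact; below is a minimal sketch for a uniform linear array with half-wavelength spacing and a single noiseless source, with the virtual-extension step omitted. Zero-padding the spatial FFT plays the role of a denser beam grid; all values are illustrative.

    ```python
    import numpy as np

    # For a half-wavelength-spaced array, channel n sees a phase of
    # pi * n * sin(theta), i.e. a spatial frequency f = sin(theta) / 2.
    M, theta = 8, np.deg2rad(20.0)
    x = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # snapshot

    spec = np.fft.fft(x, 256)                 # zero-padded spatial DFT
    f = np.fft.fftfreq(256)[np.argmax(np.abs(spec))]
    doa = np.rad2deg(np.arcsin(2.0 * f))      # ~20 degrees
    ```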

  20. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks

    PubMed Central

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-01-01

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve the network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes do not either fully explore the benefits of MIMO or adaptively determine the clustering ranges. Also, clustering mechanism needs to be further improved to enhance the cluster structure life. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can determine adaptively not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head function and the optimization of cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity are still in the same order of magnitude. PMID:27681731

  1. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bit from the secret information. Following the transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones found in the literature in terms of embedding capacity.
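
    The reflected Gray code used to decide embedded bits is the standard one; a hedged classical sketch of the encoding, together with a simple LSB-matching rule built on it, is shown below. The embedding rule here is illustrative, not the paper's exact quantum procedure.

    ```python
    # Reflected (binary) Gray code: adjacent integers differ in one bit.
    def to_gray(n):
        return n ^ (n >> 1)

    def from_gray(g):
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    # Illustrative embedding rule: adjust a pixel so the LSB of its
    # Gray-coded value carries the secret bit; flipping the pixel's LSB
    # flips the Gray LSB, so at most one bit of the pixel changes.
    def embed(pixel, secret_bit):
        return pixel if (to_gray(pixel) & 1) == secret_bit else pixel ^ 1

    assert from_gray(to_gray(173)) == 173
    assert (to_gray(embed(173, 1)) & 1) == 1
    ```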

  2. [Utility of conceptual schemes and mental maps on the teaching-learning process of residents in pediatrics].

    PubMed

    Cruza, Norberto Sotelo; Fierros, Luis E

    2006-01-01

    The present study was done at the internal medicine service of the Hospital Infantil in the State of Sonora, Mexico. We tried to address the question of the use of conceptual schemes and mind maps and their impact on the teaching-learning-evaluation process among medical residents. The aim was to analyze the effects of conceptual schemes and mind maps as teaching and evaluation tools and compare them with multiple choice exams among pediatric residents. Twenty-two residents (RI, RII, RIII) on service rotation during six months were assessed initially, followed by a lecture on a medical subject. Conceptual schemes and mind maps were then introduced as a teaching-learning-evaluation instrument. Comprehension impact and comparison with a standard multiple choice evaluation were assessed. The statistical package (JMP version 5, SAS Inst. 2004) was used. We noted that when we used conceptual schemes and mind mapping, learning improvement was noticeable among the three groups of residents (P < 0.001), and they constitute a better evaluation tool when compared with multiple choice exams (P < 0.0005). Based on our experience, we recommend the use of this educational technique for medical residents in training.

  3. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as transonic small disturbance (TSD) analyses, transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for accelerating a Newton-iteration time-marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate that this will provide up to a 65 percent reduction in computer time requirements over the existing class of explicit and implicit time-marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector-processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
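
    The role GMRES plays inside a Newton time-marching step can be sketched with SciPy: each nonlinear iteration solves a Jacobian system matrix-free. The residual below is a toy stand-in under stated assumptions, not a Navier-Stokes operator.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        return u**3 - 1.0                    # toy nonlinear residual R(u)

    n = 50
    u = np.full(n, 2.0)
    for _ in range(10):                      # Newton iterations
        r = residual(u)
        # Matrix-free Jacobian of R at u: J v = 3 u^2 v (diagonal here).
        J = LinearOperator((n, n), matvec=lambda v: 3.0 * u**2 * v)
        du, info = gmres(J, -r)              # inner Krylov (GMRES) solve
        u += du                              # Newton update
    assert np.allclose(u, 1.0)
    ```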

  4. Implementation of orthogonal frequency division multiplexing (OFDM) and advanced signal processing for elastic optical networking in accordance with networking and transmission constraints

    NASA Astrophysics Data System (ADS)

    Johnson, Stanley

    An increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications, and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. Finally, I experimentally demonstrate optical re-timing of a 10.7 Gb/s data stream utilizing the property of bound soliton pairs (or "soliton molecules") to relax to an equilibrium temporal separation after propagation through a nonlinear dispersion alternating fiber span. Pulses offset up to 16 ps from bit center are successfully re-timed. The optical re-timing scheme studied here is a good example of signal processing in the optical domain and such a technique can overcome the bandwidth bottleneck present in DSP. An enhanced version of this re-timing scheme is analyzed using numerical simulations.
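
    As a minimal illustration of the OFDM baseband processing common to these implementations, the sketch below generates one QPSK-loaded OFDM symbol with a cyclic prefix and inverts it; the sizes and constellation are illustrative and unrelated to the experimental parameters above.

    ```python
    import numpy as np

    N, cp = 64, 16                               # subcarriers, CP length
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=(N, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    sym = np.fft.ifft(qpsk) * np.sqrt(N)         # time-domain OFDM symbol
    tx = np.concatenate([sym[-cp:], sym])        # prepend cyclic prefix

    rx = np.fft.fft(tx[cp:]) / np.sqrt(N)        # receiver: drop CP, FFT
    assert np.allclose(rx, qpsk)                 # subcarriers recovered
    ```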

  5. Scalable quantum information processing with atomic ensembles and flying photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei Feng; Yu Yafei; Feng Mang

    2009-10-15

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in collective atomic states, which can be manipulated quickly and easily due to the enhanced interaction compared with the single-atom case. We demonstrate that our proposed gating can be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show the possibility of our scheme working in the bad-cavity or weak-coupling regime, which could considerably relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.

  6. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  7. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Milan Biswal

    Keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution in a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients could be doable, locating faults based on analyzing transients is still an open question.

  8. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    PubMed Central

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search over encrypted data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. A few schemes extend the notion of exact matching to similarity-based search, but they lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations. PMID:28692697
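
    A toy version of the indexing building block, a Bloom filter, is sketched below; OS2 additionally encrypts the filter and scores similarity homomorphically, which this plain-Python sketch does not attempt. Parameters are illustrative.

    ```python
    import hashlib

    # Bloom filter: k hash functions set k bits per inserted token; lookups
    # may give false positives but never false negatives.
    class BloomFilter:
        def __init__(self, m=1024, k=4):
            self.m, self.k = m, k
            self.bits = bytearray(m)

        def _positions(self, token):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{token}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, token):
            for p in self._positions(token):
                self.bits[p] = 1

        def __contains__(self, token):
            return all(self.bits[p] for p in self._positions(token))

    bf = BloomFilter()
    bf.add("keyword")
    assert "keyword" in bf      # membership query; false negatives impossible
    ```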

  9. [Formula: see text]: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain.

    PubMed

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem; Khan, Wajahat Ali

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, data are often encrypted before being outsourced to such services, which further complicates the process of accessing relevant data through search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes that extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search ([Formula: see text]) for encrypted data. It enables authorized users to model their own encrypted search queries, which are resilient to typographical errors. Unlike conventional methodologies, [Formula: see text] ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted Bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted Bloom filter based search enables [Formula: see text] to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of [Formula: see text] is evaluated on Google App Engine for various Bloom filter lengths on different cloud configurations.

  10. Reduction of catastrophic health care expenditures by a community-based health insurance scheme in Gujarat, India: current experiences and challenges.

    PubMed Central

    Ranson, Michael Kent

    2002-01-01

    OBJECTIVE: To assess the Self Employed Women's Association's Medical Insurance Fund in Gujarat in terms of insurance coverage according to income groups, protection of claimants from costs of hospitalization, time between discharge and reimbursement, and frequency of use. METHODS: One thousand nine hundred and thirty claims submitted over six years were analysed. FINDINGS: Two hundred and fifteen (11%) of 1927 claims were rejected. The mean household income of claimants was significantly lower than that of the general population. The percentage of households below the poverty line was similar for claimants and the general population. One thousand seven hundred and twelve (1712) claims were reimbursed: 805 (47%) fully and 907 (53%) at a mean reimbursement rate of 55.6%. Reimbursement more than halved the percentage of catastrophic hospitalizations (>10% of annual household income) and hospitalizations resulting in impoverishment. The average time between discharge and reimbursement was four months. The frequency of submission of claims was low (18.0/1000 members per year: 22-37% of the estimated frequency of hospitalization). CONCLUSIONS: The findings have implications for community-based health insurance schemes in India and elsewhere. Such schemes can protect poor households against the uncertain risk of medical expenses. They can be implemented in areas where institutional capacity is too weak to organize nationwide risk-pooling. Such schemes can cover poor people, including people and households below the poverty line. A trade-off exists between maintaining the scheme's financial viability and protecting members against catastrophic expenditures. To facilitate reimbursement, administration, particularly the processing of claims, should happen near claimants. Fine-tuning the design of a scheme is an ongoing process; a system of monitoring and evaluation is vital. PMID:12219151
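
    A small worked example of the >10%-of-income catastrophic-expenditure definition used above, with made-up household figures; only the 55.6% mean reimbursement rate is taken from the abstract.

    ```python
    # Illustrative arithmetic only: checks whether a hospitalization cost is
    # "catastrophic" (exceeds 10% of annual household income) before and after
    # a partial reimbursement at the study's mean rate.

    annual_income = 20000.0      # hypothetical annual household income
    hospital_cost = 3000.0       # hypothetical hospitalization cost
    reimbursement_rate = 0.556   # mean rate reported for partial reimbursements

    def is_catastrophic(cost, income, threshold=0.10):
        return cost > threshold * income

    out_of_pocket = hospital_cost * (1.0 - reimbursement_rate)
    print(is_catastrophic(hospital_cost, annual_income))  # True  (15% of income)
    print(is_catastrophic(out_of_pocket, annual_income))  # False (~6.7% of income)
    ```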

  11. Reduction of catastrophic health care expenditures by a community-based health insurance scheme in Gujarat, India: current experiences and challenges.

    PubMed

    Ranson, Michael Kent

    2002-01-01

    To assess the Self Employed Women's Association's Medical Insurance Fund in Gujarat in terms of insurance coverage according to income groups, protection of claimants from costs of hospitalization, time between discharge and reimbursement, and frequency of use. One thousand nine hundred and thirty claims submitted over six years were analysed. Two hundred and fifteen (11%) of 1927 claims were rejected. The mean household income of claimants was significantly lower than that of the general population. The percentage of households below the poverty line was similar for claimants and the general population. One thousand seven hundred and twelve (1712) claims were reimbursed: 805 (47%) fully and 907 (53%) at a mean reimbursement rate of 55.6%. Reimbursement more than halved the percentage of catastrophic hospitalizations (>10% of annual household income) and hospitalizations resulting in impoverishment. The average time between discharge and reimbursement was four months. The frequency of submission of claims was low (18.0/1000 members per year: 22-37% of the estimated frequency of hospitalization). The findings have implications for community-based health insurance schemes in India and elsewhere. Such schemes can protect poor households against the uncertain risk of medical expenses. They can be implemented in areas where institutional capacity is too weak to organize nationwide risk-pooling. Such schemes can cover poor people, including people and households below the poverty line. A trade-off exists between maintaining the scheme's financial viability and protecting members against catastrophic expenditures. To facilitate reimbursement, administration, particularly the processing of claims, should happen near claimants. Fine-tuning the design of a scheme is an ongoing process; a system of monitoring and evaluation is vital.

  12. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    NASA Astrophysics Data System (ADS)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of the project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self-contained real-time system that uses an identical algorithm. The Final Report details the work carried out to produce the prototype system, comprising four major tasks: a signal processing scheme for converting helium speech to normal-sounding speech was devised; the scheme was simulated on a general-purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and converted speech generated; an IBM-PC based 14-bit data input/output board was designed and built; and a bibliography of references on speech processing was compiled.

  13. The Portsmouth-based glaucoma refinement scheme: a role for virtual clinics in the future?

    PubMed Central

    Trikha, S; Macgregor, C; Jeffery, M; Kirwan, J

    2012-01-01

    Background Glaucoma referrals continue to impose a significant burden on Hospital Eye Services (HES), with a large proportion of these being false positives. Aims To evaluate the Portsmouth glaucoma scheme, utilising virtual clinics, digital technology, and community optometrists to streamline glaucoma referrals. Method The stages of the patient trail were mapped and, at each step of the process, 100 consecutive patient decisions were identified. The diagnostic outcomes of 50 consecutive patients referred from the refinement scheme to the HES were identified. Results A total of 76% of 'glaucoma' referrals were suitable for the refinement scheme. Overall, 94% of disc images were gradeable in the virtual clinic. In all, 11% of patients 'attending' the virtual clinic were accepted into the HES, with 89% being discharged for community follow-up. Of referrals accepted into the HES, the positive predictive value (glaucoma/ocular hypertension/suspect) was 0.78 vs 0.37 in the predating 'unrefined' scheme (95% CI 0.65-0.87). The scheme has released 1400 clinic slots/year for the HES and has produced a £244 200/year cost saving for Portsmouth Hospitals' Trust. Conclusion The refinement scheme is streamlining referrals and increasing the positive predictive rate in the diagnosis of glaucoma, glaucoma suspect, or ocular hypertension. This consultant-led, practice-based commissioning scheme, if adopted widely, is likely to yield a significant cost saving while maintaining high quality of care within the NHS. PMID:22766539

  14. A new processing scheme for ultra-high resolution direct infusion mass spectrometry data

    NASA Astrophysics Data System (ADS)

    Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus

    2018-04-01

    High-resolution, high-accuracy mass spectrometry is widely used to characterise environmental or biological samples of highly complex composition, enabling identification of the chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time-consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common-ion selection across replicate measurements and shoulder-ion removal. The scheme supports both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques, resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
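
    The three filtering steps named above (element assignment, instrumental error, blank subtraction) can be sketched schematically on a generic peak list; the field names, thresholds, and blank-matching rule below are hypothetical simplifications.

    ```python
    # Schematic peak-list filter: keep only assigned peaks within a mass-error
    # tolerance that are not explained by the blank measurement.

    def filter_peaks(peaks, blanks, ppm_tol=3.0, blank_ratio=3.0):
        """peaks/blanks: lists of dicts with 'mz', 'intensity', 'assigned',
        'ppm_error' (measured-vs-assigned exact mass difference, in ppm)."""
        blank_mz = {round(b["mz"], 4): b["intensity"] for b in blanks}
        kept = []
        for p in peaks:
            if not p["assigned"]:                  # element-assignment filter
                continue
            if abs(p["ppm_error"]) > ppm_tol:      # instrumental-error filter
                continue
            b = blank_mz.get(round(p["mz"], 4), 0.0)
            if b and p["intensity"] < blank_ratio * b:   # blank subtraction
                continue
            kept.append(p)
        return kept
    ```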

  15. Development of a discrete gas-kinetic scheme for simulation of two-dimensional viscous incompressible and compressible flows.

    PubMed

    Yang, L M; Shu, C; Wang, Y

    2016-03-01

    In this work, a discrete gas-kinetic scheme (DGKS) is presented for the simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. In the circular function-based GKS, the integrals for the conservation forms of moments over the infinite domain of the Maxwellian function-based GKS are simplified to integrals along a circle, from which explicit formulations of the conservative variables and fluxes are derived. However, these explicit formulations for viscous flows are still complicated, which may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments of the circular function-based GKS are accurately satisfied by a weighted summation of the distribution functions at the discrete points. In this work, it is shown that an integral quadrature using four discrete points on the circle, which forms the D2Q4 discrete velocity model, exactly matches the integrals. Numerical results show that the present scheme provides accurate results for incompressible and compressible viscous flows at roughly the same computational cost as that needed by the Roe scheme.
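
    The claim that four discrete points can reproduce the circle integrals is easy to check numerically. The sketch below compares equal-weight quadrature at the standard D2Q4 angles (an assumption here) against brute-force circle averages for low-order trigonometric moments.

    ```python
    # Equal-weight 4-point quadrature on the unit circle vs. circle averages
    # of <cos^p(t) sin^q(t)>; the two agree for the low-order moments checked.
    import math

    angles = [0.0, 0.5 * math.pi, math.pi, 1.5 * math.pi]  # D2Q4 directions

    def discrete_moment(p, q):
        return sum(math.cos(a) ** p * math.sin(a) ** q for a in angles) / 4.0

    def circle_moment(p, q, n=100000):
        s = 0.0
        for i in range(n):
            t = 2.0 * math.pi * i / n
            s += math.cos(t) ** p * math.sin(t) ** q
        return s / n

    for (p, q) in [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1), (2, 1), (3, 0)]:
        print((p, q), round(discrete_moment(p, q), 6), round(circle_moment(p, q), 6))
    ```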

  16. A molecular logic gate.

    PubMed

    Kompa, K L; Levine, R D

    2001-01-16

    We propose a scheme for molecule-based information processing by combining well-studied spectroscopic techniques and recent results from chemical dynamics. Specifically, we discuss how optical transitions in single molecules can be used to rapidly perform classical (Boolean) logical operations. In the proposed way, a restricted number of states in a single molecule can act as a logical gate equivalent to at least two switches. It is argued that the four-level scheme can also be used to produce gain, because it allows an inversion and not only a switching ability. The proposed scheme is quantum mechanical in that it takes advantage of the discrete nature of the energy levels, but here we discuss the temporal evolution using only the populations. On a longer time range, we suggest that the same scheme could be extended to perform quantum logic, and a tentative suggestion based on an available experiment is discussed. We believe that the pumping can provide a partial proof of principle, although this and similar experiments have not so far been interpreted in these terms.

  17. Peg-in-Hole Assembly Based on Two-phase Scheme and F/T Sensor for Dual-arm Robot

    PubMed Central

    Zhang, Xianmin; Zheng, Yanglong; Ota, Jun; Huang, Yanjiang

    2017-01-01

    This paper focuses on peg-in-hole assembly based on a two-phase scheme and a force/torque sensor (F/T sensor) for a compliant dual-arm robot, the Baxter robot. The coordinated operations of human beings in assembly applications are applied to the behaviors of the robot. A two-phase assembly scheme is proposed to overcome the inaccurate positioning of the compliant dual-arm robot. The position and orientation of the assembly pieces are adjusted in turn in an active compliant manner according to the forces and torques derived from a six degrees-of-freedom (6-DOF) F/T sensor. Experiments are conducted to verify the effectiveness and efficiency of the proposed assembly scheme. The performance of the dual-arm robot is consistent with that of human beings in the peg-in-hole assembly process. Pegs and holes with 0.5 mm clearance, both round and square, can be assembled successfully. PMID:28862691
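
    A minimal sketch of the force-guided adjustment loop implied above: lateral forces and torques from the 6-DOF F/T sensor drive small corrective motions while insertion proceeds. The read_ft()/move_delta() interfaces, gains, and thresholds are hypothetical placeholders, not the authors' controller.

    ```python
    # Active-compliance insertion loop (sketch). read_ft() returns
    # (fx, fy, fz, tx, ty, tz); move_delta(dx=..., dy=..., dz=..., rx=..., ry=...)
    # commands a small Cartesian/angular displacement. Both are assumed
    # robot-API callables supplied by the caller.

    def compliant_insert(read_ft, move_delta, kp_f=0.001, kp_t=0.01,
                         f_tol=0.5, t_tol=0.05, max_steps=1000):
        for _ in range(max_steps):
            fx, fy, fz, tx, ty, tz = read_ft()
            if max(abs(fx), abs(fy)) < f_tol and max(abs(tx), abs(ty)) < t_tol:
                # Contact relaxed: advance the peg along the insertion axis.
                move_delta(dz=-0.0005)
            else:
                # Move against lateral forces and rotate against torques.
                move_delta(dx=-kp_f * fx, dy=-kp_f * fy,
                           rx=-kp_t * tx, ry=-kp_t * ty)
    ```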

  18. Peg-in-Hole Assembly Based on Two-phase Scheme and F/T Sensor for Dual-arm Robot.

    PubMed

    Zhang, Xianmin; Zheng, Yanglong; Ota, Jun; Huang, Yanjiang

    2017-09-01

    This paper focuses on peg-in-hole assembly based on a two-phase scheme and a force/torque sensor (F/T sensor) for a compliant dual-arm robot, the Baxter robot. The coordinated operations of human beings in assembly applications are applied to the behaviors of the robot. A two-phase assembly scheme is proposed to overcome the inaccurate positioning of the compliant dual-arm robot. The position and orientation of the assembly pieces are adjusted in turn in an active compliant manner according to the forces and torques derived from a six degrees-of-freedom (6-DOF) F/T sensor. Experiments are conducted to verify the effectiveness and efficiency of the proposed assembly scheme. The performance of the dual-arm robot is consistent with that of human beings in the peg-in-hole assembly process. Pegs and holes with 0.5 mm clearance, both round and square, can be assembled successfully.

  19. Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing

    2017-05-01

    The roll grinder is one of the important parts of rolling machinery, and the grinding precision of the roll surface has a direct influence on the surface quality of the steel strip. However, during the grinding process the centre bears the weight of the roll and alternating stresses, so wear or spalling faults easily develop on the centre, which leads to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of a roll grinder. First, the fast kurtogram method is employed to select the sub-band filter parameters for optimal resonance demodulation. The envelope spectrum is then derived from the filtered signal. Finally, two health indicators are designed to diagnose the centre wear fault. The proposed scheme is assessed by analysing experimental data from a roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
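
    A compact sketch of the envelope-analysis stage: band-pass filtering around the resonance band (as the kurtogram would suggest), Hilbert-envelope demodulation, and an envelope spectrum. The band edges and filter order below are assumptions for illustration.

    ```python
    # Resonance-demodulation core: band-pass around the kurtogram-selected
    # band, demodulate with the Hilbert envelope, inspect the envelope spectrum
    # for the fault characteristic frequency.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_spectrum(x, fs, band=(2000.0, 4000.0)):
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        xf = filtfilt(b, a, x)              # sub-band selected (e.g., via kurtogram)
        env = np.abs(hilbert(xf))           # demodulated envelope
        env -= env.mean()
        spec = np.abs(np.fft.rfft(env)) / len(env)
        freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
        return freqs, spec
    ```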

  20. Schemes generating entangled states and entanglement swapping between photons and three-level atoms inside optical cavities for quantum communication

    NASA Astrophysics Data System (ADS)

    Heo, Jino; Kang, Min-Sung; Hong, Chang-Ho; Yang, Hyeon; Choi, Seong-Gon

    2017-01-01

    We propose quantum information processing schemes based on cavity quantum electrodynamics (QED) for quantum communication. First, to generate entangled states (Bell and Greenberger-Horne-Zeilinger [GHZ] states) between flying photons and three-level atoms inside optical cavities, we utilize a controlled phase flip (CPF) gate that can be implemented via cavity QED. Subsequently, we present an entanglement swapping scheme that can be realized using single-qubit measurements and CPF gates via optical cavities. These schemes can be directly applied to construct an entanglement channel for a communication system between two users. Consequently, it is possible for a trust center with quantum nodes to establish the linked channel (entanglement channel) between two separate long-distance users via the distribution of Bell states and entanglement swapping. Furthermore, in our schemes the main physical component is the CPF gate between the photons and the three-level atoms in cavity QED, which is feasible in practice. Thus, our schemes can be experimentally realized with current technology.

  1. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that can operate over real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received at the given IP address and port on an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of the hardware implementation of secure video communication are successfully solved, reaching a balance amongst a sufficient security level, real-time processing of massive video data, and utilization of the available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile facilities in the future.
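
    As a toy stand-in for the joint encryption idea (not the authors' cipher), a logistic-map keystream XORed with JPEG frame bytes illustrates chaotic stream encryption of video frames; the map, parameters, and key handling are simplifying assumptions.

    ```python
    # Logistic-map keystream XORed with a (truncated, illustrative) JPEG frame.
    # Decryption is the same XOR with the same chaotic initial condition.

    def logistic_keystream(x0, n, r=3.99):
        x, out = x0, bytearray()
        for _ in range(n):
            x = r * x * (1.0 - x)           # chaotic iteration on (0, 1)
            out.append(int(x * 256) % 256)  # quantize state to one byte
        return bytes(out)

    def xor_bytes(data, key):
        return bytes(d ^ k for d, k in zip(data, key))

    frame = b"\xff\xd8...jpeg frame bytes...\xff\xd9"   # placeholder payload
    ks = logistic_keystream(x0=0.3141592, n=len(frame))
    cipher = xor_bytes(frame, ks)
    assert xor_bytes(cipher, ks) == frame   # decryption restores the frame
    ```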

  2. Semi-quantum Secure Direct Communication Scheme Based on Bell States

    NASA Astrophysics Data System (ADS)

    Xie, Chen; Li, Lvzhou; Situ, Haozhen; He, Jianhao

    2018-06-01

    Recently, the idea of semi-quantumness has often been used in designing quantum cryptographic schemes, which allows some of the participants of a quantum cryptographic scheme to remain classical. One of the reasons why this idea is popular is that it allows a quantum information processing task to be accomplished using as few quantum resources as possible. In this paper, we extend the idea to quantum secure direct communication (QSDC) by proposing a semi-quantum secure direct communication scheme. In the scheme, the message sender, Alice, encodes each bit into a Bell state |φ⁺⟩ = (1/√2)(|00⟩ + |11⟩) or |ψ⁺⟩ = (1/√2)(|01⟩ + |10⟩), and the message receiver, Bob, is classical in the sense that he can either let the qubit he receives reflect undisturbed, or measure it in the computational basis {|0⟩, |1⟩} and then resend it in the state he found. Moreover, the security analysis of our scheme is also given.

  3. LES of Temporally Evolving Mixing Layers by Three High Order Schemes

    NASA Astrophysics Data System (ADS)

    Yee, H.; Sjögreen, B.; Hadjadj, A.

    2011-10-01

    The performance of three high-order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers at different convective Mach numbers (Mc), ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The considered high-order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high-order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with the experimental results of Barone et al. (2006) and with published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with the experimental data and DNS computations.

  4. Advanced digital signal processing for short-haul and access network

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chi, Nan

    2016-02-01

    Digital signal processing (DSP) has recently proved to be a successful technology in high-speed, high-spectral-efficiency optical short-haul and access networks, where it enables high performance through digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on the feedback of receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and the constant- or multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest ETDM 120-GBaud PDM-PAM-4 signal, one external modulator, and coherent detection. A line rate of 480-Gb/s is achieved, which accommodates a 20% forward-error correction (FEC) overhead while keeping the 400-Gb/s net information rate. The performance after fiber transmission shows large margins for both short-range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks by using high-order QAM. We propose and demonstrate a high-speed multi-band CAP-WDM-PON system based on intensity modulation, direct detection and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10-Gb/s per user in the downstream over 40-km SMF.
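
    A bare-bones training-aided complex LMS equalizer of the kind whose converged taps, per the scheme above, can be fed back for transmitter-side pre-equalization; the tap count and step size are illustrative assumptions.

    ```python
    # Training-aided complex LMS adaptive equalizer (sketch).
    import numpy as np

    def lms_equalize(rx, training, n_taps=11, mu=1e-3):
        """rx, training: equal-length complex arrays (received / reference)."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0 + 0j              # center-spike initialization
        out = np.zeros(len(rx), dtype=complex)
        for n in range(n_taps, len(rx)):
            x = rx[n - n_taps:n][::-1]         # tap-delay-line input vector
            y = np.dot(w, x)                   # equalizer output
            e = training[n] - y                # training-aided error
            w += mu * e * np.conj(x)           # LMS tap update
            out[n] = y
        return out, w                          # converged taps usable for pre-eq
    ```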

  5. Study on construction technology of metro tunnel under a glass curtain wall

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Yu, Deqiang

    2018-03-01

    To ensure the safety of a glass curtain wall building above a loess tunnel and to obtain an optimal scheme, an elastic-plastic FEM model is established to simulate three reinforcement schemes based on a tunnel section of Xi'an Metro Line 3. The results show that the settlement value of the optimal scheme is reduced by 69.89% compared with the drainage measures, and the uneven settlement value is reduced by 57.5%. The construction points, technical processes and technical indexes of the optimal scheme are introduced. According to the actual project, the cumulative settlement of the glass curtain wall building is 16 mm, which meets the control standards. The reinforcement scheme can provide a reference for the design and construction of metros in loess areas.

  6. Medical image enhancement using resolution synthesis

    NASA Astrophysics Data System (ADS)

    Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.

    2011-03-01

    We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image using classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image detail. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
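
    A toy rendering of classification-based linear regression with ridge regularization, in the spirit of RS: cluster degraded patches, then fit one ridge predictor per class. This is a sketch under stated assumptions (scikit-learn availability, a k-means classifier), not the authors' exact design.

    ```python
    # Per-class ridge predictors: classify each low-quality patch, then apply
    # the regressor trained for that class to predict the high-quality pixel.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    def train_rs(patches_lo, pixels_hi, n_classes=8, alpha=1.0):
        """patches_lo: (n, f) degraded patches; pixels_hi: (n,) clean targets."""
        km = KMeans(n_clusters=n_classes, n_init=10).fit(patches_lo)
        models = {}
        for c in range(n_classes):
            idx = km.labels_ == c
            models[c] = Ridge(alpha=alpha).fit(patches_lo[idx], pixels_hi[idx])
        return km, models

    def predict_rs(km, models, patches_lo):
        classes = km.predict(patches_lo)
        return np.array([models[c].predict(p[None, :])[0]
                         for c, p in zip(classes, patches_lo)])
    ```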

  7. Design of an image encryption scheme based on a multiple chaotic map

    NASA Astrophysics Data System (ADS)

    Tong, Xiao-Jun

    2013-07-01

    In order to address the degeneration of chaos under limited computer precision and the small key space of the Cat map, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to the key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the low-precision problem of one-dimensional chaotic functions and offers higher speed and higher security.
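
    For context, the classical Arnold cat map below is the textbook starting point for the permutation stage; the paper's block Cat map generalizes it with chaos-controlled parameters, which are not reproduced here.

    ```python
    # Classical cat-map permutation of an N x N image: the unimodular matrix
    # [[1, a], [b, a*b + 1]] (determinant 1) shuffles pixel coordinates mod N.
    import numpy as np

    def cat_map_permute(img, a=1, b=1, rounds=1):
        n = img.shape[0]                    # assumes a square image
        out = img.copy()
        for _ in range(rounds):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + a * y) % n, (b * x + (a * b + 1) * y) % n] = out[x, y]
            out = nxt
        return out

    img = np.arange(64).reshape(8, 8)
    scrambled = cat_map_permute(img, rounds=3)   # invertible: map is bijective
    ```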

  8. A planning support system to optimize approval of private housing development projects

    NASA Astrophysics Data System (ADS)

    Hussnain, M. Q.; Wakil, K.; Waheed, A.; Tahir, A.

    2016-06-01

    Of Pakistan's 182 million people, 38% reside in urban areas, and an average urban growth rate of 1.6% is raising housing demand significantly. The poor state response to fulfilling housing needs has resulted in a mushroom growth of private housing schemes (PHS) over the years. Consequently, in just five major cities of Punjab there are 383 legal and 150 illegal private housing development projects, against 120 public-sector housing schemes. A major factor behind the uncontrolled growth of unapproved PHS is the prolonged and delayed approval process in the concerned approval authorities, which requires 13 months on average. Currently, manual and paper-based approaches are used for vetting and for granting permission, which is highly subjective and non-transparent. This study aims to design a flexible planning support system (PSS) to optimize the vetting process of PHS projects under any development authority in Pakistan by reducing the time and cost required for site and document investigations. Relying on a review of regulatory documents and interviews with professional planners and land developers, this study describes the structure of a PSS developed using open-source geospatial tools such as OpenGeo Suite, PHP, and PostgreSQL. It highlights the development of a Knowledge Module (based on regulatory documents) containing equations related to scheme type, size (area), location, access road, components of the layout plan, planning standards and other related approval checks. Furthermore, it presents the architecture of the database module and the system data requirements, categorized as base datasets (a built-in part of the PSS) and input datasets (related to the housing project under approval). It is practically demonstrated that developing a customized PSS to optimize the PHS approval process in Pakistan is achievable with geospatial technology. With such a system, the approval process for private housing schemes becomes not only quicker and more user-friendly but also transparent.

  9. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function, which is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.

  10. Quasi-disjoint pentadiagonal matrix systems for the parallelization of compact finite-difference schemes and filters

    NASA Astrophysics Data System (ADS)

    Kim, Jae Wook

    2013-05-01

    This paper proposes a novel systematic approach for the parallelization of pentadiagonal compact finite-difference schemes and filters based on domain decomposition. The proposed approach allows a pentadiagonal banded matrix system to be split into quasi-disjoint subsystems by using a linear-algebraic transformation technique. As a result the inversion of pentadiagonal matrices can be implemented within each subdomain in an independent manner subject to a conventional halo-exchange process. The proposed matrix transformation leads to new subdomain boundary (SB) compact schemes and filters that require three halo terms to exchange with neighboring subdomains. The internode communication overhead in the present approach is equivalent to that of standard explicit schemes and filters based on seven-point discretization stencils. The new SB compact schemes and filters demand additional arithmetic operations compared to the original serial ones. However, it is shown that the additional cost becomes sufficiently low by choosing optimal sizes of their discretization stencils. Compared to earlier published results, the proposed SB compact schemes and filters successfully reduce parallelization artifacts arising from subdomain boundaries to a level sufficiently negligible for sophisticated aeroacoustic simulations without degrading parallel efficiency. The overall performance and parallel efficiency of the proposed approach are demonstrated by stringent benchmark tests.

  11. Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode

    PubMed Central

    2015-01-01

    Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255

  12. A microcomputer based frequency-domain processor for laser Doppler anemometry

    NASA Technical Reports Server (NTRS)

    Horne, W. Clifton; Adair, Desmond

    1988-01-01

    A prototype multi-channel laser Doppler anemometry (LDA) processor was assembled using a wideband transient recorder and a microcomputer with an array processor for fast Fourier transform (FFT) computations. The prototype instrument was used to acquire, process, and record signals from a three-component wind tunnel LDA system subject to various conditions of noise and flow turbulence. The recorded data was used to evaluate the effectiveness of burst acceptance criteria, processing algorithms, and selection of processing parameters such as record length. The recorded signals were also used to obtain comparative estimates of signal-to-noise ratio between time-domain and frequency-domain signal detection schemes. These comparisons show that the FFT processing scheme allows accurate processing of signals for which the signal-to-noise ratio is 10 to 15 dB less than is practical using counter processors.
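
    The core of the frequency-domain burst processing described above can be sketched in a few lines: window the digitized burst record, FFT it, and validate the dominant spectral peak as the Doppler frequency. The windowing and the SNR-like validation threshold are illustrative choices, not the prototype's exact criteria.

    ```python
    # FFT-based Doppler-burst frequency estimation with a simple validation test.
    import numpy as np

    def doppler_frequency(record, fs, snr_min=5.0):
        record = record - record.mean()          # remove the pedestal/DC offset
        win = np.hanning(len(record))            # taper to reduce leakage
        spec = np.abs(np.fft.rfft(record * win))
        freqs = np.fft.rfftfreq(len(record), 1.0 / fs)
        k = np.argmax(spec[1:]) + 1              # dominant peak, skipping DC
        snr_like = spec[k] / (np.median(spec) + 1e-12)
        return (freqs[k], snr_like) if snr_like >= snr_min else (None, snr_like)
    ```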

  13. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    NASA Astrophysics Data System (ADS)

    Jha, Pradeep Kumar

    Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here by the GRI-Mech 3.0 scheme; this detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed, and other non-reacting flows are considered to further validate aspects of the numerical scheme. The present schemes predict results in good agreement with published experimental results and significantly reduce the computational cost involved in modelling turbulent diffusion flames, both in terms of storage and processing time.

  14. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system that takes these sensor node constraints into consideration is highly desirable. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and reduced SRAM usage by keeping the memory mapping information very small, and to provide high query-response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.

  15. Design of robust iterative learning control schemes for systems with polytopic uncertainties and sector-bounded nonlinearities

    NASA Astrophysics Data System (ADS)

    Boski, Marcin; Paszke, Wojciech

    2017-01-01

    This paper deals with the design of iterative learning control schemes for uncertain systems with static nonlinearities. More specifically, the nonlinear part is assumed to be sector bounded and the system matrices are assumed to range over a polytope of matrices. For systems with such nonlinearities and uncertainties, the repetitive process setting is exploited to develop linear matrix inequality based conditions for computing the feedback and feedforward (learning) controllers. These controllers guarantee acceptable dynamics along the trials and ensure convergence of the trial-to-trial error dynamics, respectively. Numerical examples illustrate the theoretical results and confirm the effectiveness of the designed control scheme.

  16. Physical Layer Secret-Key Generation Scheme for Transportation Security Sensor Network

    PubMed Central

    Yang, Bin; Zhang, Jianfeng

    2017-01-01

    Wireless Sensor Networks (WSNs) are widely used in different disciplines, including transportation systems, agriculture field environment monitoring, healthcare systems, and industrial monitoring. The security of the wireless communication link between sensor nodes is a critical challenge in WSNs. In this paper, we propose a new physical layer secret-key generation scheme for a transportation security sensor network. The scheme is based on the cooperation of all the sensor nodes, thus avoiding the key distribution process and increasing the security of the system. Different passive and active attack models are analyzed in this paper. We also prove that when the number of cooperative nodes is large enough, the secret key remains secure even when the eavesdropper is equipped with multiple antennas. Numerical simulations are performed to show the efficiency of the proposed scheme. PMID:28657588
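
    The channel-reciprocity principle behind such physical-layer schemes can be illustrated with a toy quantizer: both ends threshold their nearly reciprocal RSS traces into bits. The paper's cooperative multi-node protocol is richer; only the quantization step is shown, under assumed noise levels.

    ```python
    # Toy reciprocity-based key material: Alice and Bob threshold their noisy
    # measurements of the same channel at the median and compare bit agreement.
    import numpy as np

    def quantize_rss(rss):
        med = np.median(rss)
        return [1 if v > med else 0 for v in rss]

    rng = np.random.default_rng(1)
    channel = rng.normal(0.0, 2.0, 256)           # shared fading trace (e.g., RSS)
    alice = channel + rng.normal(0.0, 0.1, 256)   # Alice's noisy measurement
    bob = channel + rng.normal(0.0, 0.1, 256)     # Bob's reciprocal measurement
    agreement = np.mean([a == b for a, b in
                         zip(quantize_rss(alice), quantize_rss(bob))])
    print(f"raw bit agreement: {agreement:.2%}")  # high, thanks to reciprocity
    ```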

  17. Controllable high-fidelity quantum state transfer and entanglement generation in circuit QED.

    PubMed

    Xu, Peng; Yang, Xu-Chen; Mei, Feng; Xue, Zheng-Yuan

    2016-01-25

    We propose a scheme to realize controllable quantum state transfer and entanglement generation among transmon qubits in the typical circuit QED setup based on adiabatic passage. By designing the time-dependent driving pulses applied to the transmon qubits, we find that fast quantum state transfer can be achieved between arbitrary pairs of qubits and that quantum entanglement among the qubits can also be engineered. Furthermore, we numerically analyze the influence of decoherence on our scheme using currently accessible experimental system parameters. The results show that our scheme is very robust against both cavity decay and qubit relaxation; the fidelities of the state transfer and entanglement preparation processes can be very high.

  18. Physical Layer Secret-Key Generation Scheme for Transportation Security Sensor Network.

    PubMed

    Yang, Bin; Zhang, Jianfeng

    2017-06-28

    Wireless Sensor Networks (WSNs) are widely used in different disciplines, including transportation systems, agriculture field environment monitoring, healthcare systems, and industrial monitoring. The security of the wireless communication link between sensor nodes is a critical challenge in WSNs. In this paper, we propose a new physical layer secret-key generation scheme for a transportation security sensor network. The scheme is based on the cooperation of all the sensor nodes, thus avoiding the key distribution process and increasing the security of the system. Different passive and active attack models are analyzed in this paper. We also prove that when the number of cooperative nodes is large enough, the secret key remains secure even when the eavesdropper is equipped with multiple antennas. Numerical simulations are performed to show the efficiency of the proposed scheme.

  19. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

    An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates that compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.
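
    A minimal numerical illustration of variable-probability (PPS) selection at one stage of such a design, with a Hansen-Hurwitz expansion; the size measures and volumes are made-up figures, and the auxiliary size variable stands in for a LANDSAT-derived quantity.

    ```python
    # PPS sampling with replacement plus the Hansen-Hurwitz estimator of the
    # population total: (1/n) * sum(y_i / p_i), which is unbiased for the total.
    import numpy as np

    rng = np.random.default_rng(7)
    size = np.array([120.0, 80.0, 40.0, 200.0, 60.0])          # auxiliary sizes
    volume = np.array([1500.0, 900.0, 450.0, 2600.0, 700.0])   # unit volumes

    p = size / size.sum()          # selection probability proportional to size
    n = 3
    draws = rng.choice(len(size), size=n, replace=True, p=p)
    estimate = np.mean(volume[draws] / p[draws])
    print(estimate, volume.sum())  # estimate fluctuates around the true total
    ```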

  20. Teleportation-based quantum information processing with Majorana zero modes

    DOE PAGES

    Vijay, Sagar; Fu, Liang

    2016-12-29

    In this work, we present a measurement-based scheme for performing braiding operations on Majorana zero modes in mesoscopic superconductor islands and for detecting their non-Abelian statistics without moving or hybridizing them. In our scheme for “braiding without braiding”, the topological qubit encoded in any pair of well-separated Majorana zero modes is read out from the transmission phase shift in electron teleportation through the island in the Coulomb-blockade regime. Finally, we propose experimental setups to measure the teleportation phase shift via conductance in an electron interferometer or persistent current in a closed loop.

  1. Teleportation-based quantum information processing with Majorana zero modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijay, Sagar; Fu, Liang

    In this work, we present a measurement-based scheme for performing braiding operations on Majorana zero modes in mesoscopic superconductor islands and for detecting their non-Abelian statistics without moving or hybridizing them. In our scheme for “braiding without braiding”, the topological qubit encoded in any pair of well-separated Majorana zero modes is read out from the transmission phase shift in electron teleportation through the island in the Coulomb-blockade regime. Finally, we propose experimental setups to measure the teleportation phase shift via conductance in an electron interferometer or persistent current in a closed loop.

  2. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation clearer, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests demonstrate the savings in CPU time and memory.

  3. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    PubMed

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. The decryption process can be realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is strongly guaranteed. The feasibility of this method and its robustness against both occlusion and additive-noise attacks are discussed with simulations.
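
    A numerical sketch of the encrypt/decrypt cycle just described: each cipher value is the key-weighted sum (bucket value) of the data stream, and correlating cipher values with the key patterns recovers the data. The sizes and the covariance-style normalization follow common ghost-imaging practice and are assumptions.

    ```python
    # Computational temporal ghost imaging as encryption: cipher = K @ data;
    # decryption via correlation <(B - <B>)(K - <K>)> over many patterns.
    import numpy as np

    rng = np.random.default_rng(0)
    n_len, n_pat = 64, 4000
    data = rng.random(n_len)                                # 1D data stream
    keys = rng.integers(0, 2, size=(n_pat, n_len)).astype(float)  # binary keys

    cipher = keys @ data                                    # one sum per pattern

    recovered = (cipher - cipher.mean()) @ (keys - keys.mean(axis=0)) / n_pat
    corr = np.corrcoef(recovered, data)[0, 1]
    print(f"correlation with original: {corr:.3f}")         # -> 1 as n_pat grows
    ```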

  4. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Strum, R.; Stiles, D.

    In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurements of strain and vibration.
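
    One simple instance of demodulating phase-shifted interferometer outputs is quadrature arctangent demodulation with phase unwrapping, sketched below under the assumption of an ideal 90-degree pair; the paper's specific phase-shift generation is not reproduced here.

    ```python
    # Arctangent demodulation of a quadrature pair i = cos(phi), q = sin(phi),
    # unwrapped so excursions beyond 2*pi are tracked (large-variation readout).
    import numpy as np

    def demodulate(i_signal, q_signal):
        return np.unwrap(np.arctan2(q_signal, i_signal))

    t = np.linspace(0.0, 1.0, 5000)
    phi = 12.0 * np.sin(2 * np.pi * 5 * t)    # large test phase excursion (rad)
    i_sig, q_sig = np.cos(phi), np.sin(phi)
    err = np.max(np.abs(demodulate(i_sig, q_sig) - phi))
    print(f"max reconstruction error: {err:.2e} rad")
    ```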

  5. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    DOE PAGES

    Liu, Y.; Strum, R.; Stiles, D.; ...

    2017-11-20

    In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurements of strain and vibration.

  6. Low-latency optical parallel adder based on a binary decision diagram with wavelength division multiplexing scheme

    NASA Astrophysics Data System (ADS)

    Shinya, A.; Ishihara, T.; Inoue, K.; Nozaki, K.; Kita, S.; Notomi, M.

    2018-02-01

    We propose an optical parallel adder based on a binary decision diagram that can calculate simply by propagating light through electrically controlled optical pass gates. The CARRY and inverted-CARRY operations are multiplexed in one circuit by a wavelength division multiplexing scheme to reduce the number of optical elements, and only a single gate constitutes the critical path for one digit of the calculation. The processing time reaches picoseconds per digit when 100-μm-long optical pass gates are used, which is ten times faster than a CMOS circuit.

  7. A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.

    PubMed

    Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared among n participants and any k or more participants have the ability to reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy conditions. To address these deficiencies, in this paper we present a novel SSIS scheme based on compressed sensing, a technique that has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.

  8. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not identified at the beginning of the login phase. Third, the password change mechanism is inadequate, in that it requires inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency.

  9. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System

    PubMed Central

    Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not identified at the beginning of the login phase. Third, the password change mechanism is inadequate, in that it requires inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency. PMID:28046075

  10. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves extensive research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with the filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes the data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of Boolean expressions with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology; new quality schemes are added by extending the ontology and adding code for each new scheme.

  11. Direct trust-based security scheme for RREQ flooding attack in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Dutta, Kamlesh

    2017-06-01

    The routing algorithms in MANETs exhibit distributed and cooperative behaviour, which makes them an easy target for denial-of-service (DoS) attacks. The RREQ flooding attack is a flooding-type DoS attack against the Ad hoc On Demand Distance Vector (AODV) routing protocol, in which the attacker broadcasts a massive number of bogus Route Request (RREQ) packets to set up routes to existent or non-existent destinations in the network. This paper presents a direct trust-based security scheme to detect and mitigate the impact of the RREQ flooding attack on the network, in which every node evaluates the trust degree of its neighbours by analysing the frequency of RREQ packets originated by them over a short period of time. Taking the node's trust degree as input, the proposed scheme is smoothly extended to suppress surplus and bogus RREQ flooding packets at one-hop neighbours during the route discovery process. This scheme distinguishes itself from existing techniques by not directly blocking the service of a normal node whose RREQ traffic increases under some unusual conditions. The results obtained through the simulation experiments clearly show the feasibility and effectiveness of the proposed defensive scheme.
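
    A schematic of the trust evaluation just described: RREQ arrivals per neighbour are counted over a sliding window, and the rate is mapped to a trust degree that gates forwarding. The thresholds and the linear trust mapping are illustrative assumptions, not the paper's parameters.

    ```python
    # Sliding-window RREQ rate monitor mapped to a trust degree in [0, 1].
    from collections import defaultdict, deque
    import time

    class RreqTrustMonitor:
        def __init__(self, window=10.0, rate_ok=3.0, rate_max=10.0):
            self.window, self.rate_ok, self.rate_max = window, rate_ok, rate_max
            self.events = defaultdict(deque)   # neighbour -> arrival timestamps

        def on_rreq(self, neighbour, now=None):
            now = time.monotonic() if now is None else now
            q = self.events[neighbour]
            q.append(now)
            while q and q[0] < now - self.window:   # slide the window forward
                q.popleft()

        def trust(self, neighbour):
            rate = len(self.events[neighbour]) / self.window
            if rate <= self.rate_ok:
                return 1.0                          # normal RREQ behaviour
            if rate >= self.rate_max:
                return 0.0                          # flooding suspected
            return (self.rate_max - rate) / (self.rate_max - self.rate_ok)

        def should_forward(self, neighbour, threshold=0.5):
            return self.trust(neighbour) >= threshold
    ```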

  12. High-fidelity and low-latency mobile fronthaul based on segment-wise TDM and MIMO-interleaved arraying.

    PubMed

    Li, Longsheng; Bi, Meihua; Miao, Xin; Fu, Yan; Hu, Weisheng

    2018-01-22

    In this paper, we demonstrate an advanced arraying scheme in the TDM-based analog mobile fronthaul system to enhance signal fidelity, in which a segment of the antenna carrier signal (AxC) with an appropriate length serves as the granularity for TDM aggregation. Without introducing extra processing, the entire system can be realized with simple DSP. A theoretical analysis is presented to verify the feasibility of this scheme, and to evaluate its effectiveness, an experiment with ~7-GHz bandwidth and 20 8 × 8 MIMO group signals is conducted. Results show that the segment-wise TDM is completely compatible with MIMO-interleaved arraying, which is employed in an existing TDM scheme to improve bandwidth efficiency. Moreover, compared to existing TDM schemes, our scheme not only satisfies the latency requirement of 5G but also significantly reduces the multiplexed signal bandwidth, hence providing higher signal fidelity in the bandwidth-limited fronthaul system. The experimental EVM results verify that 256-QAM is supportable using the segment-wise TDM arraying with only 250-ns latency, whereas ordinary TDM arraying supports only 64-QAM.

  13. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan

    2015-04-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, which a traditional single mass spectrometer cannot resolve. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
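
    The closed-loop learning step can be sketched as a simple genetic algorithm over phase-mask pixels; `evaluate` below is a placeholder for the real fitness read from the two mass spectrometers, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIXELS, POP, N_GEN = 64, 30, 50   # phase-mask pixels, population, generations

# Hypothetical stand-in for the experiment: the real loop would apply the
# phase mask to the pulse shaper and return a fitness read from the two
# spectrometers (e.g. a ratio of target fragment peaks of the two isomers).
def evaluate(phases):
    return float(np.cos(phases).sum())        # placeholder objective

def ga_optimize():
    pop = rng.uniform(0, 2 * np.pi, size=(POP, N_PIXELS))
    for _ in range(N_GEN):
        scores = np.array([evaluate(ind) for ind in pop])
        elite = pop[np.argsort(scores)[::-1][:POP // 2]]   # selection
        children = elite.copy()
        partners = rng.permutation(elite)
        for i in range(len(children)):                     # one-point crossover
            cut = rng.integers(1, N_PIXELS)
            children[i, cut:] = partners[i, cut:]
        children += rng.normal(0, 0.1, children.shape)     # mutation
        pop = np.vstack([elite, children])
    return max(pop, key=evaluate)

print(evaluate(ga_optimize()))
```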

  14. Quantum secret sharing with identity authentication based on Bell states

    NASA Astrophysics Data System (ADS)

    Abulkasim, Hussein; Hamad, Safwat; Khalifa, Amal; El Bahnasy, Khalid

    Quantum secret sharing techniques allow two or more parties to securely share a key, such that the same number of parties or fewer can efficiently deduce the secret key. In this paper, we propose an authenticated quantum secret sharing protocol, in which a quantum dialogue protocol is adopted to authenticate the identity of the parties. The participants simultaneously authenticate each other's identity based on parts of a prior shared key, and the whole prior shared key can be reused for deducing the secret data. Although the proposed scheme does not significantly improve efficiency, it is more secure than some existing quantum secret sharing schemes because of the identity authentication process. In addition, the proposed scheme can withstand participant attacks, man-in-the-middle attacks, impersonation attacks, and Trojan-horse attacks, as well as information leaks.

  15. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be weighed to find the optimum scheme. Addressing this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and combines engineering economics, reliability theory, and information entropy theory into a new evaluation method for building construction projects. Combined with a practical case, the paper also presents the detailed computing process and steps: selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all candidate schemes, and making the analysis and decision; an entropy-weight sketch follows below. The presented method offers a valuable reference for risk computation in building construction projects.
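
    A minimal sketch of the entropy-weight step that such an evaluation typically uses, assuming a decision matrix whose rows are candidate schemes and whose columns are normalized benefit-type indexes; this illustrates the information-entropy idea, not the paper's full method.

```python
import numpy as np

def entropy_weights(X):
    """X: (n_schemes, n_indexes), normalized so higher is better."""
    P = X / X.sum(axis=0)                        # proportion of each scheme
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logP).sum(axis=0) / np.log(n)      # entropy of each index
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()                           # index weights

def synthesis_scores(X):
    return X @ entropy_weights(X)                # higher score = better scheme

# Illustrative matrix: 3 candidate schemes x 4 indexes
# (cost, progress, quality, safety), already normalized.
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.7, 0.9, 0.6, 0.8],
              [0.9, 0.7, 0.8, 0.6]])
print(synthesis_scores(X))                       # rank the three schemes
```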

  16. Atomic-Scale Simulation of Electrochemical Processes at Electrode/Water Interfaces under Referenced Bias Potential.

    PubMed

    Bouzid, Assil; Pasquarello, Alfredo

    2018-04-19

    Based on constant Fermi-level molecular dynamics and a proper alignment scheme, we perform simulations of the Pt(111)/water interface under variable bias potential referenced to the standard hydrogen electrode (SHE). Our scheme yields a potential of zero charge μ_pzc of ∼0.22 eV relative to the SHE and a double-layer capacitance C_dl of ≃19 μF cm^-2, in excellent agreement with experimental measurements. In addition, we study the structural reorganization of the electrical double layer for bias potentials ranging from -0.92 eV to +0.44 eV and find that O-down configurations, which are dominant at potentials above the pzc, reorient to favor H-down configurations as the measured potential becomes negative. Our modeling scheme allows one not only to access atomic-scale processes at metal/water interfaces, but also to quantitatively estimate macroscopic electrochemical quantities.

  17. A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.

    2017-06-01

    Chaotic maps possess high parameter sensitivity, random-like behavior, and one-way computation, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme that uses multiple chaotic maps to generate efficient variable-sized hash values. The message is divided into four parts; each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is processed separately through a 2D chaotic map unit. The final hash value is generated by combining the two partial hash codes; a sketch of the idea appears below. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance, and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds capabilities comparable to some recent chaos-based hash algorithms.
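
    A minimal sketch in the spirit of the scheme, not the authors' algorithm: four message parts each drive a 1D logistic map, and the intermediate codes are folded into the digest (the paper's 2D chaotic map stage is simplified away).

```python
def logistic_hash(message: bytes, digest_bits: int = 128, r: float = 3.99) -> int:
    """Toy chaos-based hash: not cryptographically vetted."""
    parts = [message[i::4] for i in range(4)]      # split message into 4 parts
    codes = []
    for part in parts:
        x = 0.5
        for byte in part:
            x = r * x * (1 - x)                    # logistic map iteration
            x = (x + (byte + 1) / 257.0) % 1.0     # inject the message byte
        codes.append(x)
    digest = 0
    for i in range(digest_bits // 32):             # combine intermediate codes
        x = codes[i % 4]
        for _ in range(16):                        # extra mixing iterations
            x = r * x * (1 - x)
        digest = (digest << 32) | int(x * 0xFFFFFFFF)
    return digest

print(hex(logistic_hash(b"hello world")))
print(hex(logistic_hash(b"hello worle")))          # key/message sensitivity
```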

  18. Processing lunar soils for oxygen and other materials

    NASA Technical Reports Server (NTRS)

    Knudsen, Christian W.; Gibson, Michael A.

    1992-01-01

    Two types of lunar materials are excellent candidates for lunar oxygen production: ilmenite and silicates such as anorthite. Both are minable at the lunar surface, occurring in soils, breccias, and basalts. Because silicates are considerably more abundant than ilmenite, they may be preferred as source materials. Depending on the processing method chosen for oxygen production and the feedstock material, various useful metals and bulk materials can be produced as byproducts. Available processing techniques include hydrogen reduction of ilmenite and electrochemical and chemical reductions of silicates. Processes in these categories are generally in preliminary development stages and need significant research and development support to carry them to practical deployment, particularly as a lunar-based operation. The goal of beginning lunar processing operations by 2010 requires that planning and research and development emphasize the simplest processing schemes. However, more complex schemes that now appear to present difficult technical challenges may offer more valuable metal byproducts later. While they require more time and effort to perfect, the more complex or difficult schemes may provide important processing and product improvements with which to extend and elaborate the initial lunar processing facilities. A balanced R&D program should take this into account. The following topics are discussed: (1) ilmenite--semi-continuous process; (2) ilmenite--continuous fluid-bed reduction; (3) utilization of spent ilmenite to produce bulk materials; (4) silicates--electrochemical reduction; and (5) silicates--chemical reduction.

  19. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function. Its main contribution is the ability to avoid local minima during the optimization process thanks to the annealing scheme, as illustrated below. It outperforms the simple classifiers used in the combination and several combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
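
    For illustration, a plain stochastic simulated-annealing sketch showing how the annealing schedule lets the search escape local minima; the paper's deterministic variant updates class memberships instead of sampling moves, and is not reproduced here.

```python
import math
import random

random.seed(1)

def anneal(energy, neighbour, x0, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: minimize `energy` from start state x0."""
    x, e = x0, energy(x0)
    t = t0
    for _ in range(steps):
        cand = neighbour(x)
        e_cand = energy(cand)
        # Accept worse states with probability exp(-dE/t): this is what
        # allows the search to climb out of local minima early on.
        if e_cand < e or random.random() < math.exp((e - e_cand) / t):
            x, e = cand, e_cand
        t *= cooling                      # annealing schedule
    return x, e

energy = lambda x: x**4 - 3 * x**2 + x    # double-well test function
neighbour = lambda x: x + random.gauss(0, 0.1)
print(anneal(energy, neighbour, x0=2.0))  # escapes the shallow well
```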

  20. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme to represent three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).

  1. Book Selection, Collection Development, and Bounded Rationality.

    ERIC Educational Resources Information Center

    Schwartz, Charles A.

    1989-01-01

    Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The roles of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…

  2. Optical vector network analyzer based on double-sideband modulation.

    PubMed

    Jun, Wen; Wang, Ling; Yang, Chengwu; Li, Ming; Zhu, Ning Hua; Guo, Jinjin; Xiong, Liangming; Li, Wei

    2017-11-01

    We report an optical vector network analyzer (OVNA) based on double-sideband (DSB) modulation using a dual-parallel Mach-Zehnder modulator. The device under test (DUT) is measured twice with different modulation schemes. By post-processing the measurement results, the response of the DUT can be obtained accurately. Since DSB modulation is used in our approach, the measurement range is doubled compared with conventional single-sideband (SSB) modulation-based OVNA. Moreover, the measurement accuracy is improved by eliminating the even-order sidebands. The key advantage of the proposed scheme is that the measurement of a DUT with bandpass response can also be simply realized, which is a big challenge for the SSB-based OVNA. The proposed method is theoretically and experimentally demonstrated.

  3. Laser cooling of molecules by zero-velocity selection and single spontaneous emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ooi, C. H. Raymond

    2010-11-15

    A laser-cooling scheme for molecules is presented based on a repeated cycle of zero-velocity selection, deceleration, and irreversible accumulation. Although this scheme also employs a single spontaneous emission, as in [Raymond Ooi, Marzlin, and Audretsch, Eur. Phys. J. D 22, 259 (2003)], in order to circumvent the difficulty of maintaining closed pumping cycles in molecules, there are two distinct features that make the cooling process of this scheme faster and more practical. First, the zero-velocity selection creates a narrow-velocity-width population with zero mean velocity, such that no further deceleration (with many stimulated Raman adiabatic passage (STIRAP) pulses) is required. Second, only two STIRAP processes are required to decelerate the remaining hot molecular ensemble to create a finite population around zero velocity for the next cycle. We present a setup to realize the cooling process in one dimension with trapping in the other two dimensions using a Stark barrel. Numerical estimates of the cooling parameters and simulations with density matrix equations using OH molecules show the applicability of the cooling scheme. For a gas at temperature T=1 K, the estimated cooling time is only 2 ms, with phase-space density increased by about 30 times. The possibility of extension to three-dimensional cooling via thermalization is also discussed.

  4. The effect of individually-induced processes on image-based overlay and diffraction-based overlay

    NASA Astrophysics Data System (ADS)

    Oh, SeungHwa; Lee, Jeongjin; Lee, Seungyoon; Hwang, Chan; Choi, Gilheyun; Kang, Ho-Kyu; Jung, EunSeung

    2014-04-01

    In this paper, a set of wafers with separated processes was prepared, and overlay measurement results were compared for two methods: image-based overlay (IBO) and diffraction-based overlay (DBO). Based on the experimental results, a theoretical treatment of the relationship between overlay-mark deformation and overlay variation is presented. Moreover, overlay reading simulation was used to verify and predict the overlay variation due to deformation of the overlay mark caused by the induced processes. This study provides an understanding of individual process effects on overlay measurement error, along with a guideline for selecting the proper overlay measurement scheme for a specific layer.

  5. Implementation of density-based solver for all speeds in the framework of OpenFOAM

    NASA Astrophysics Data System (ADS)

    Shen, Chun; Sun, Fengxian; Xia, Xinlin

    2014-10-01

    In the framework of the open-source CFD code OpenFOAM, a density-based solver for all-speed flow fields is developed. In this solver the preconditioned all-speed AUSM+(P) scheme is adopted, and the dual-time scheme is implemented to handle unsteady processes. Parallel computation can be employed to accelerate the solution process. Different interface reconstruction algorithms are implemented, and their accuracy with respect to convection is compared. Three benchmark tests of lid-driven cavity flow, flow over a bump, and flow over a forward-facing step are presented to show the accuracy of the AUSM+(P) solver for low-speed incompressible flow, transonic flow, and supersonic/hypersonic flow. First, for the lid-driven cavity flow, the computational results obtained by different interface reconstruction algorithms are compared. It is shown that the one-dimensional reconstruction scheme adopted in this solver possesses high accuracy and that the solver developed in this paper can effectively capture the features of low-speed incompressible flow. Then, via the test cases of flow over a bump and over a forward-facing step, the ability to capture the characteristics of transonic and supersonic/hypersonic flows is confirmed. The forward-facing step proves to be the most challenging for the preconditioned solvers with and without the dual-time scheme. Nonetheless, the solvers described in this paper reproduce the main features of this flow, including the evolution of the initial transient.

  6. A Framework for Simulating Turbine-Based Combined-Cycle Inlet Mode-Transition

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.; Vrnak, Daniel R.; Slater, John W.; Hessel, Emil O.

    2012-01-01

    A simulation framework based on the Memory-Mapped-Files technique was created to operate multiple numerical processes in locked time steps and exchange I/O data synchronously among them to simulate system dynamics. This simulation scheme is currently used to study the complex interactions between inlet flow dynamics, variable-geometry actuation mechanisms, and flow controls in the transition from supersonic to hypersonic conditions and vice versa. A study of Mode-Transition Control for a high-speed inlet wind-tunnel model with this MMF-based framework is presented to illustrate the scheme and demonstrate its usefulness in simulating supersonic and hypersonic inlet dynamics and controls, as well as other types of complex systems. A minimal sketch of the underlying data-exchange idea follows below.
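
    A minimal Python sketch of lock-stepped data exchange through a memory-mapped file, in the spirit of the framework; the record layout, file name, and polling protocol are illustrative assumptions.

```python
import mmap
import struct
import time

# Assumed shared-record layout: [step counter (int32)] [payload (float64)].
RECORD = struct.Struct("=id")

def open_shared(path, create=False):
    """Open (optionally create) a small file and map it into memory."""
    if create:
        with open(path, "wb") as f:
            f.write(b"\x00" * RECORD.size)
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), RECORD.size)

def publish(mm, step, value):
    """One process writes its state for time-step `step`."""
    mm.seek(0)
    mm.write(RECORD.pack(step, value))
    mm.flush()

def wait_for_step(mm, step, poll=0.001):
    """The partner spins until the shared record reaches `step`."""
    while True:
        mm.seek(0)
        got, value = RECORD.unpack(mm.read(RECORD.size))
        if got >= step:
            return value
        time.sleep(poll)

f, mm = open_shared("sim_channel.bin", create=True)
publish(mm, step=1, value=3.14)
print(wait_for_step(mm, step=1))   # returns 3.14 immediately in this demo
mm.close(); f.close()
```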

  7. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.

  8. Integration of object-oriented knowledge representation with the CLIPS rule based system

    NASA Technical Reports Server (NTRS)

    Logie, David S.; Kamil, Hasan

    1990-01-01

    The paper describes a portion of the work aimed at developing an integrated, knowledge based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way so as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule based system C Language Production Systems (CLIPS), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule based knowledge is used to simulate the reasoning process. Alone, the object based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.

  9. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless-mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and considered as inputs to the fuzzy decision-making system in order to select the preferred AP among nearby WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions, as sketched below. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed independently in each MN once the RSS, direction toward APs, and AP load are known. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches. PMID:25574490
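
    A minimal sketch of the adaptive-membership idea: the centre and width of each metric's membership function are set from running statistics, and a weighted fuzzy score ranks candidate APs; the metric names, normalization, and scoring rule are illustrative, not the paper's.

```python
import numpy as np

def gaussian_membership(x, mean, std):
    """Gaussian membership function with adaptively chosen centre/width."""
    std = max(std, 1e-6)
    return float(np.exp(-0.5 * ((x - mean) / std) ** 2))

def ap_score(metrics, history, weights):
    """metrics/history keys are normalized so that higher is better."""
    score = 0.0
    for key, w in weights.items():
        mu, sigma = np.mean(history[key]), np.std(history[key])
        # Adapt the membership function to the observed statistics:
        # reward values about one standard deviation above the mean.
        score += w * gaussian_membership(metrics[key], mu + sigma, sigma)
    return score

# Normalized observations collected from nearby WLANs (higher = better).
history = {"rss": [0.55, 0.60, 0.50], "load": [0.40, 0.50, 0.45]}
weights = {"rss": 0.7, "load": 0.3}          # adjustable weight vector
print(ap_score({"rss": 0.70, "load": 0.55}, history, weights))
```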

  10. An adaptive handover prediction scheme for seamless mobility based wireless networks.

    PubMed

    Sadiq, Ali Safa; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless-mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and considered as inputs to the fuzzy decision-making system in order to select the preferred AP among nearby WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed independently in each MN once the RSS, direction toward APs, and AP load are known. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches.

  11. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding-time or reconstructed-image-quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched within the selected domain set whose APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
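
    A minimal sketch of APCC-based matching, computing the absolute Pearson correlation between a range block and candidate domain blocks; the classification and sorting machinery of the full scheme is omitted.

```python
import numpy as np

def apcc(a, b):
    """Absolute Pearson correlation coefficient between two image blocks."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return abs(float(a @ b)) / denom if denom else 0.0

def best_domain_block(range_block, domain_blocks):
    """Return index and APCC of the best-matching domain block."""
    scores = [apcc(range_block, d) for d in domain_blocks]
    i = int(np.argmax(scores))
    return i, scores[i]

rng = np.random.default_rng(0)
blocks = rng.random((8, 4, 4))                 # toy 4x4 blocks
print(best_domain_block(blocks[0], blocks[1:]))
```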

  12. Testing conceptual and physically based soil hydrology schemes against observations for the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Guimberteau, M.; Ducharne, A.; Ciais, P.; Boisier, J. P.; Peng, S.; De Weirdt, M.; Verbeeck, H.

    2014-06-01

    This study analyzes the performance of the two soil hydrology schemes of the land surface model ORCHIDEE in estimating Amazonian hydrology and phenology for five major sub-basins (Xingu, Tapajós, Madeira, Solimões and Negro), during the 29-year period 1980-2008. A simple 2-layer scheme with a bucket topped by an evaporative layer is compared to an 11-layer diffusion scheme. The soil schemes are coupled with a river routing module and a process model of plant physiology, phenology and carbon dynamics. The simulated water budget and vegetation functioning components are compared with several data sets at sub-basin scale. The use of the 11-layer soil diffusion scheme does not significantly change the Amazonian water budget simulation when compared to the 2-layer soil scheme (+3.1 and -3.0% in evapotranspiration and river discharge, respectively). However, the higher water-holding capacity of the soil and the physically based representation of runoff and drainage in the 11-layer soil diffusion scheme result in more dynamic soil water storage variation and improved simulation of the total terrestrial water storage when compared to GRACE satellite estimates. The greater soil water storage within the 11-layer scheme also results in increased dry-season evapotranspiration (+0.5 mm d-1, +17%) and improves river discharge simulation in the southeastern sub-basins such as the Xingu. Evapotranspiration over this sub-basin is sustained during the whole dry season with the 11-layer soil diffusion scheme, whereas the 2-layer scheme limits it after only 2 dry months. Lower plant drought stress simulated by the 11-layer soil diffusion scheme leads to better simulation of the seasonal cycle of photosynthesis (GPP) when compared to a GPP data-driven model based on eddy covariance and satellite greenness measurements. A dry-season length between 4 and 7 months over the entire Amazon Basin is found to be critical in distinguishing differences in hydrological feedbacks between the soil and the vegetation cover simulated by the two soil schemes. On average, the multilayer soil diffusion scheme provides little improvement in simulated hydrology over the wet tropical Amazonian sub-basins, but a more significant improvement is found over the drier sub-basins. The use of a multilayer soil diffusion scheme might become critical for assessments of future hydrological changes, especially in southern regions of the Amazon Basin where longer dry seasons and more severe droughts are expected in the next century.

  13. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    Two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme has better throughput than the APC scheme but suffers from a higher packet error rate. The wireless channel state changes all the time; because of this random, time-varying nature, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired throughput. Better throughput can be achieved if the appropriate transmission scheme is chosen based on the condition of the channel. Following this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission using the PC scheme, APC scheme, or SR ARQ scheme to achieve better throughput, as sketched below. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes alone.
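
    A minimal sketch of the adaptive idea: pick the transmission scheme from an estimate of the current channel state; the BER thresholds are illustrative assumptions, not the paper's.

```python
def choose_scheme(estimated_ber: float) -> str:
    """Map an estimated channel bit-error rate to a transmission scheme."""
    if estimated_ber < 1e-4:
        return "SR-ARQ"   # good state: plain retransmission is cheapest
    if estimated_ber < 1e-2:
        return "PC"       # moderate state: combine two received copies
    return "APC"          # bad state: aggressive multi-copy combining

for ber in (1e-5, 1e-3, 5e-2):
    print(ber, "->", choose_scheme(ber))
```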

  14. 78 FR 12135 - List of Participating Countries and Entities (Hereinafter Known as “Participants”) Under the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... has not been controlled through the Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation... Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia--Ministry of Trade...

  15. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  16. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO, developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches, as it provides not only a single (non-unique) solution but also uncertainty bounds along with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead towards joint inversion schemes. After giving insight into the inversion scheme itself (based on Delayed Rejection Adaptive Metropolis, DRAM; a simplified sketch follows below), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault-plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data of a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
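
    A simplified adaptive-Metropolis sketch (the delayed-rejection stage of DRAM is omitted); `log_post` stands in for the log-posterior of the kinematic slip model, and the sketch assumes at least two parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_metropolis(log_post, x0, n_steps=5000, adapt_start=500):
    """Adaptive Metropolis MCMC: proposal covariance learned from the chain."""
    x = np.asarray(x0, dtype=float)
    d = len(x)
    chain = np.empty((n_steps, d))
    lp = log_post(x)
    cov = np.eye(d) * 0.1
    for i in range(n_steps):
        if i > adapt_start:
            # Classic scaling 2.38^2/d of the empirical chain covariance,
            # plus a small jitter to keep the proposal non-degenerate.
            cov = np.cov(chain[:i].T) * 2.38**2 / d + 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Demo: sample a 2D standard Gaussian posterior.
chain = adaptive_metropolis(lambda v: -0.5 * float(np.sum(np.square(v))),
                            [3.0, -3.0], n_steps=2000)
print(chain.mean(axis=0))   # near zero; spread gives uncertainty bounds
```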

  17. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-01

    The cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network, and processing unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile Internet users, traditional architectures cannot optimize and schedule resources for high-level service guarantees because of the communication obstacles among these domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network, and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on the MDRI architecture in terms of resource utilization, path blocking probability, network cost, and path provisioning latency, compared with other provisioning schemes.

  18. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network.

    PubMed

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-28

    The cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network, and processing unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile Internet users, traditional architectures cannot optimize and schedule resources for high-level service guarantees because of the communication obstacles among these domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network, and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on the MDRI architecture in terms of resource utilization, path blocking probability, network cost, and path provisioning latency, compared with other provisioning schemes.

  19. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response (QR) code and a high-dimensional chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the QR code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted into ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the encryption process, parameters such as the rotation angles and random phase masks are generated as interim variables and functions based on the Chen system (a sketch of which follows below). A new phase retrieval algorithm is designed to reconstruct the initial QR code in the decryption process, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient to store and transmit. Meanwhile, the security of the proposed scheme is greatly enhanced by the high sensitivity of the Chen system to its initial values. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
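
    A minimal sketch of deriving encryption parameters from the Chen system (with the standard parameters a = 35, b = 3, c = 28); the mapping from trajectory samples to gyrator rotation angles is an illustrative assumption, not the authors' exact rule.

```python
import numpy as np

def chen_trajectory(x0, n, dt=0.001, a=35.0, b=3.0, c=28.0, burn=5000):
    """Integrate the Chen system with forward Euler and return n samples."""
    x, y, z = x0
    out = np.empty((n, 3))
    for i in range(burn + n):
        dx = a * (y - x)
        dy = (c - a) * x - x * z + c * y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn:                      # discard the transient
            out[i - burn] = (x, y, z)
    return out

# Key = initial condition; tiny changes yield entirely different parameters.
traj = chen_trajectory((0.1, 0.2, 0.3), n=4)
angles = (traj[:, 0] % 1.0) * np.pi / 2    # e.g. gyrator rotation angles
print(angles)
```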

  20. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    PubMed Central

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-01-01

    Cloud radio access network (C-RAN) becomes a promising scenario to accommodate high-performance services with ubiquitous user coverage and real-time cloud computing in 5G area. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. Traditional architecture cannot implement the resource optimization and scheduling for the high-level service guarantee due to the communication obstacle among them with the growing number of mobile internet users. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on OpenFlow-based enhanced SDN testbed. The performance of RIP scheme under heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on MDRI architecture in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes. PMID:27465296

  1. Electricity storage using a thermal storage scheme

    NASA Astrophysics Data System (ADS)

    White, Alexander

    2015-01-01

    The increasing use of renewable energy technologies for electricity generation, many of which have an unpredictably intermittent nature, will inevitably lead to a greater demand for large-scale electricity storage schemes. For example, the expanding fraction of electricity produced by wind turbines will require either backup or storage capacity to cover extended periods of wind lull. This paper describes a recently proposed storage scheme, referred to here as Pumped Thermal Storage (PTS), which is based on "sensible heat" storage in large thermal reservoirs. During the charging phase, the system effectively operates as a high temperature-ratio heat pump, extracting heat from a cold reservoir and delivering heat to a hot one. In the discharge phase the processes are reversed and it operates as a heat engine. The round-trip efficiency is limited only by process irreversibilities (as opposed to Second Law limitations on the coefficient of performance and the thermal efficiency of the heat pump and heat engine, respectively). PTS is currently being developed in both France and England. In both cases, the schemes operate on the Joule-Brayton (gas turbine) cycle, using argon as the working fluid. However, the French scheme proposes the use of turbomachinery for compression and expansion, whereas reciprocating devices are proposed for the scheme being developed in England. The current paper focuses on the impact of the various process irreversibilities on the thermodynamic round-trip efficiency of the scheme. Consideration is given to compression and expansion losses and pressure losses (in pipework, valves and thermal reservoirs); heat-transfer-related irreversibility in the thermal reservoirs is discussed but not included in the analysis. Results are presented demonstrating how the various loss parameters and operating conditions influence the overall performance.

  2. Using Nonlinear Stochastic Evolutionary Game Strategy to Model an Evolutionary Biological Network of Organ Carcinogenesis Under a Natural Selection Scheme

    PubMed Central

    Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei

    2015-01-01

    Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., phenotype of attractor) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton–Jacobi-inequality-constrained optimization problem. The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, a reasonable match to the statistical average age of lung cancer onset. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004

  3. The interactions between soil-biosphere-atmosphere land surface model with a multi-energy balance (ISBA-MEB) option in SURFEXv8 - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand

    2017-02-01

    Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the user community. As a part of the trend in LSM development, there have been ongoing efforts to improve the representation of the land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis and respiration and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached and there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention to develop an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization has been developed called the ISBA multi-energy balance (MEB) in order to address these issues. ISBA-MEB consists in a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of the routines and physics parameterizations with the standard version of ISBA. This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.

  4. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

    We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov's method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes, as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves one way and a sound wave travels the other way (such that the wave is not moving relative to the grid), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates the incorporation of new algorithms and physical processes.

  5. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to controlling integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order-plus-deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations expressed as functions of the characteristic parameters of the model. For implementation of the proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with those of the Matausek-Micić scheme for linear systems using simulations.

  6. Kalman filters for assimilating near-surface observations into the Richards equation - Part 1: Retrieving state profiles with linear and nonlinear numerical schemes

    NASA Astrophysics Data System (ADS)

    Chirico, G. B.; Medina, H.; Romano, N.

    2014-07-01

    This paper examines the potential of different algorithms, based on the Kalman filtering approach, for assimilating near-surface observations into a one-dimensional Richards equation governing water flow in soil. Our specific objectives are: (i) to compare the efficiency of different Kalman filter algorithms in retrieving matric pressure head profiles when they are implemented with different numerical schemes of the Richards equation; (ii) to evaluate the performance of these algorithms when nonlinearities arise from the nonlinearity of the observation equation, i.e. when surface soil water content observations are assimilated to retrieve matric pressure head values. The study is based on a synthetic simulation of an evaporation process from a homogeneous soil column. The first objective is achieved by implementing a Standard Kalman Filter (SKF) algorithm with both an explicit finite difference scheme (EX) and a Crank-Nicolson (CN) linear finite difference scheme of the Richards equation. The Unscented Kalman Filter (UKF) and Ensemble Kalman Filter (EnKF) are applied to handle the nonlinearity of a backward Euler finite difference scheme; a sketch of the EnKF analysis step is given below. To accomplish the second objective, an analogous framework is applied, except that SKF is replaced with the Extended Kalman Filter (EKF) in combination with a CN numerical scheme, so as to handle the nonlinearity of the observation equation. While the EX scheme is computationally too inefficient to be implemented in an operational assimilation scheme, the retrieval algorithm implemented with a CN scheme is found to be computationally more feasible and accurate than those implemented with the backward Euler scheme, at least for the examined one-dimensional problem. The UKF appears to be as feasible as the EnKF when one has to handle nonlinear numerical schemes or additional nonlinearities arising from the observation equation, at least for systems of small dimensionality like the one examined in this study.
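
    A minimal sketch of the EnKF analysis step for a single (scalar) near-surface observation, using the perturbed-observations variant; the state is an ensemble of pressure-head profiles and `H` simply picks the observed node. Names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, H, R):
    """ensemble: (n_members, n_nodes); obs: scalar; H: (n_nodes,); R: variance."""
    X = ensemble.T                                  # (n_nodes, n_members)
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)           # state anomalies
    HX = H @ X                                      # simulated observations
    HA = HX - HX.mean()
    P_hh = (HA @ HA) / (n - 1) + R                  # innovation variance
    K = (A @ HA) / ((n - 1) * P_hh)                 # Kalman gain, (n_nodes,)
    obs_pert = obs + rng.normal(0, np.sqrt(R), n)   # perturbed observations
    return (X + np.outer(K, obs_pert - HX)).T       # analysis ensemble

# Demo: 50 members, 20-node pressure-head profile, observe the surface node.
ens = rng.normal(-100.0, 10.0, size=(50, 20))
H = np.zeros(20); H[0] = 1.0
analysis = enkf_update(ens, obs=-80.0, H=H, R=4.0)
print(analysis[:, 0].mean())   # surface node pulled toward the observation
```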

  7. Branching Search

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-12-01

    Search processes play key roles in various scientific fields. A widespread and effective search-process scheme, which we term Restart Search, is based on the following restart algorithm: i) set a timer and initiate a search task; ii) if the task is completed before the timer expires, stop; iii) if the timer expires before the task is completed, go back to the first step and restart the search process anew. In this paper a branching feature is added to the restart algorithm: at every transition from the algorithm's third step to its first step, branching takes place, thus multiplying the search effort. This branching feature yields a search-process scheme which we term Branching Search. The running time of Branching Search is analyzed, closed-form results are established, and these results are compared to the corresponding running-time results of Restart Search; a simulation sketch of both schemes follows below.
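
    A minimal simulation sketch of both schemes, with task-completion times drawn from an illustrative heavy-tailed distribution.

```python
import random

random.seed(0)

def task_time():
    """One search attempt's completion time (heavy-tailed, min value 1)."""
    return random.paretovariate(1.5)

def restart_search(timer):
    elapsed = 0.0
    while True:
        t = task_time()
        if t <= timer:
            return elapsed + t            # finished before the timer expired
        elapsed += timer                  # timer expired: restart anew

def branching_search(timer, branch_factor=2):
    searchers, elapsed = 1, 0.0
    while True:
        t = min(task_time() for _ in range(searchers))
        if t <= timer:
            return elapsed + t            # some branch finished this round
        elapsed += timer
        searchers *= branch_factor        # branching multiplies the effort

# timer must exceed the minimum task time (1 here) for either to terminate.
print(restart_search(timer=2.0), branching_search(timer=2.0))
```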

  8. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While the focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme 'Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the 'error' in the parametrised tendency that SPPT seeks to represent. The high-resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce the initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high-resolution model, we can measure the 'error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process-based approaches to stochastic parametrisation. [Figure: instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low-resolution forecast model.]

  9. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control one. It is a simple constant-gain control scheme better suited to parallel mechanisms. The force control scheme can be easily modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

  10. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    PubMed

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
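
    The Burrows-Wheeler transform at the heart of such FM-index schemes can be illustrated in a few lines (a naive construction for exposition only; production tools such as Centrifuge build the index far more efficiently via suffix-array methods).

        def bwt(text, sentinel="$"):
            """Naive Burrows-Wheeler transform: last column of sorted rotations."""
            s = text + sentinel
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(row[-1] for row in rotations)

        # The last column, plus rank structures, supports FM-index backward search.
        print(bwt("ACGTACGT"))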

  11. Optimal placement of fast cut back units based on the theory of cellular automata and agent

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Yan, Feng

    2017-06-01

    Thermal power generation units with the fast cut back (FCB) function can supply power to the auxiliary system and maintain island operation after a major blackout, so they are excellent substitutes for traditional black-start power sources. Different placement schemes for FCB units have different influences on the subsequent restoration process. Considering the locality of the emergency dispatching rules, the unpredictability of specific dispatching instructions, and unexpected situations like failure of transmission line energization, a novel deduction model for network reconfiguration based on the theory of cellular automata and agents is established. Several indexes are then defined for evaluating the placement schemes for FCB units. The attribute weights determination method based on subjective and objective integration and grey relational analysis are used in combination to determine the optimal placement scheme for FCB units. The effectiveness of the proposed method is validated by the test results on the New England 10-unit 39-bus power system.

  12. A kernel-based novelty detection scheme for the ultra-fast detection of chirp evoked Auditory Brainstem Responses.

    PubMed

    Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J

    2010-01-01

    Auditory Brainstem Responses (ABRs) are used as an objective method for the diagnosis and quantification of hearing loss. Many methods for automatic recognition of ABRs have been developed, but none of them includes the individual measurement setup in the analysis. The purpose of this work was to design a fast recognition scheme for chirp-evoked ABRs that is adjusted to the individual measurement condition using spontaneous electroencephalographic activity (SA). For the classification, the kernel-based novelty detection scheme used features based on the inter-sweep instantaneous phase synchronization as well as energy and entropy relations in the time-frequency domain. This method provided discrimination of SA from stimulations above the hearing threshold with a minimum number of sweeps, i.e., 200 individual responses. It is concluded that the proposed paradigm, processing procedures and stimulation techniques improve the detection of ABRs in terms of the degree of objectivity, i.e., automation of the procedure, and measurement time.

  13. Onboard image compression schemes for modular airborne imaging spectrometer (MAIS) based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenyu; Wang, Jianyu

    1996-11-01

    In this paper, two compression schemes are presented to meet the urgent need to compress the huge volume and high data rate of imaging spectrometer images. According to the multidimensional nature of the images and the high-fidelity requirement of the reconstruction, both schemes were devised to exploit the high redundancy in both the spatial and spectral dimensions based on mature wavelet transform technology. The wavelet transform was applied here in two ways. First, with the spatial wavelet transform and spectral DPCM decorrelation, a compression ratio of up to 84.3 with a near-lossless result (PSNR > 48 dB) was attained. This is based on the fact that the edge structures of all the spectral bands are similar, while the wavelet transform has higher resolution in the high-frequency components. Second, exploiting the wavelet's high efficiency in processing 'wideband transient' signals, it was used to transform the raw nonstationary signals in the spectral dimension. A good result was also attained.
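
    To make the two decorrelation ingredients concrete, the sketch below pairs a one-level Haar transform (the simplest spatial wavelet step) with first-order DPCM along the spectral axis (a toy Python illustration; the schemes in the paper use multi-level 2-D wavelets and tuned quantizers).

        import numpy as np

        def haar_1level(row):
            """One-level 1-D Haar transform: pairwise averages and details."""
            a = (row[0::2] + row[1::2]) / 2.0
            d = (row[0::2] - row[1::2]) / 2.0
            return a, d

        def spectral_dpcm(cube):
            """First-order DPCM along the spectral (band) axis of a 3-D cube."""
            residual = cube.astype(float)
            residual[1:] -= cube[:-1]       # band k stored as difference from band k-1
            return residual

        cube = np.random.default_rng(0).integers(0, 256, size=(4, 8, 8))
        res = spectral_dpcm(cube)           # spectral decorrelation
        a, d = haar_1level(res[0].ravel())  # spatial decorrelation of one band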

  14. A controlled ac Stark echo for quantum memories.

    PubMed

    Ham, Byoung S

    2017-08-09

    A quantum memory protocol of controlled ac Stark echoes (CASE) based on a double rephasing photon echo scheme via controlled Rabi flopping is proposed. The double rephasing scheme of photon echoes inherently satisfies the no-population inversion requirement for quantum memories, but the resultant absorptive echo remains a fundamental problem. Herein, it is reported that the first echo in the double rephasing scheme can be dynamically controlled so that it does not affect the second echo, which is accomplished by using unbalanced ac Stark shifts. Then, the second echo is coherently controlled to be emissive via controlled coherence conversion. Finally a near perfect ultralong CASE is presented using a backward echo scheme. Compared with other methods such as dc Stark echoes, the present protocol is all-optical with advantages of wavelength-selective dynamic control of quantum processing for erasing, buffering, and channel multiplexing.

  15. Controllable high-fidelity quantum state transfer and entanglement generation in circuit QED

    PubMed Central

    Xu, Peng; Yang, Xu-Chen; Mei, Feng; Xue, Zheng-Yuan

    2016-01-01

    We propose a scheme to realize controllable quantum state transfer and entanglement generation among transmon qubits in the typical circuit QED setup based on adiabatic passage. By designing the time-dependent driving pulses applied to the transmon qubits, we find that fast quantum state transfer can be achieved between arbitrary pairs of qubits, and quantum entanglement among the qubits can also be engineered. Furthermore, we numerically analyze the influence of decoherence on our scheme with currently accessible experimental parameters. The results show that our scheme is very robust against both cavity decay and qubit relaxation, and the fidelities of the state transfer and entanglement preparation processes can be very high. In addition, our scheme is shown to be insensitive to inhomogeneity of the qubit-resonator coupling strengths. PMID:26804326

  16. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-10-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still form the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme, although a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles; it is instead expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned, depending on the purpose, from reliably identifying clear pixels to reliably identifying definitely cloudy pixels. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-real-time use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach that was used for the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without sensor-specific tuning. Moreover, it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy on which it is based. Furthermore, a few example results from NOAA-18 are presented.
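
    The practical consequence of the probabilistic output is that a single cloud mask threshold no longer fits all users; a minimal sketch of the two operating points mentioned above (threshold values invented for illustration):

        import numpy as np

        def cloud_masks(cloud_prob, clear_conf=0.05, cloudy_conf=0.95):
            """Derive purpose-dependent masks from a per-pixel cloud probability."""
            confidently_clear = cloud_prob < clear_conf     # e.g. for surface retrievals
            confidently_cloudy = cloud_prob > cloudy_conf   # e.g. for cloud statistics
            return confidently_clear, confidently_cloudy

        p = np.random.default_rng(2).uniform(size=(4, 4))   # stand-in probability field
        clear, cloudy = cloud_masks(p)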

  17. Implementation of single-photon quantum routing and decoupling using a nitrogen-vacancy center and a whispering-gallery-mode resonator-waveguide system.

    PubMed

    Cao, Cong; Duan, Yu-Wen; Chen, Xi; Zhang, Ru; Wang, Tie-Jun; Wang, Chuan

    2017-07-24

    A quantum router is a key element needed for the construction of future complex quantum networks. However, quantum routing with photons, and its inverse, quantum decoupling, are difficult to implement because photons do not interact, or interact only very weakly, in nonlinear media. In this paper, we investigate the possibility of implementing photonic quantum routing based on effects in cavity quantum electrodynamics, and present a scheme for single-photon quantum routing controlled by another photon, using a hybrid system consisting of a single nitrogen-vacancy (NV) center coupled with a whispering-gallery-mode resonator-waveguide structure. Different from the cases in which classical information is used to control the path of quantum signals, both the control and signal photons are quantum in our implementation. Compared with probabilistic quantum routing protocols based on linear optics, our scheme is deterministic and also scalable to multiple photons. We also present a scheme for single-photon quantum decoupling from an initial state with polarization and spatial-mode encoding, which implements an inverse operation to the quantum routing. We discuss the feasibility of our schemes under current or near-future techniques, and show that both schemes can operate effectively in the bad-cavity regime. We believe that these schemes could be key building blocks for future complex quantum networks and large-scale quantum information processing.

  18. LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network

    NASA Astrophysics Data System (ADS)

    Cha, Daehyun; Hwang, Chansik

    Previous vehicle surveillance research on distributed sensor networks focused on overcoming the power limitation and communication bandwidth constraints of sensor nodes. In spite of these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for lightweight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.

  19. 77 FR 27831 - List of Participating Countries and Entities Under the Clean Diamond Trade Act of 2003

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-11

    ... Kimberley Process Certification Scheme (KPCS). Under Section 3(2) of the Act, ``controlled through the Kimberley Process Certification Scheme'' means an importation from the territory of a Participant or... Participants in the Kimberley Process Certification Scheme. Angola--Ministry of Geology and Mines. Armenia...

  20. 31 CFR 592.201 - Prohibited importation and exportation of any rough diamond; permitted importation or exportation...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., unless the rough diamond has been controlled through the Kimberley Process Certification Scheme. (b) The... States of any rough diamond not controlled through the Kimberley Process Certification Scheme do not... Process Certification Scheme and thus is not permitted, except in the following circumstance. The...

  1. What Is the Value of Value-Based Purchasing?

    PubMed

    Tanenbaum, Sandra J

    2016-10-01

    Value-based purchasing (VBP) is a widely favored strategy for improving the US health care system. The meaning of value that predominates in VBP schemes is (1) conformance to selected process and/or outcome metrics, and sometimes (2) such conformance at the lowest possible cost. In other words, VBP schemes choose some number of "quality indicators" and financially incent providers to meet them (and not others). Process measures are usually based on clinical science that cannot determine the effects of a process on individual patients or patients with comorbidities, and do not necessarily measure effects that patients value; additionally, there is no provision for different patients valuing different things. Proximate outcome measures may or may not predict distal ones, and the more distal the outcome, the less reliably it can be attributed to health care. Outcome measures may be quite rudimentary, such as mortality rates, or highly contestable: survival or function after prostate surgery? When cost is an element of value-based purchasing, it is the cost to the value-based payer and not to other payers or patients' families. The greatest value of value-based purchasing may not be to patients or even payers, but to policy makers seeking a morally justifiable alternative to politically contested regulatory policies. Copyright © 2016 by Duke University Press.

  2. Comparative Study of Three High Order Schemes for LES of Temporally Evolving Mixing Layers

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Sjogreen, Biorn Axel; Hadjadj, C.

    2012-01-01

    Three high order shock-capturing schemes are compared for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7) and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with experimental results of Barone et al. (2006), and published direct numerical simulations (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.

  3. A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender

    2018-03-01

    As communication systems enable us to transmit large chunks of data, in the form of both text and images, there is a need to explore algorithms which can provide higher security without increasing the time complexity significantly. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for the generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is practical, since it has the lowest time complexity among its counterparts.
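
    A toy version of the two ingredients, a chaotic map supplying key material and the RC4 keystream driving diffusion, is sketched below (textbook RC4, with a plain logistic map standing in for the paper's intertwining maps; the confusion stage is omitted).

        def logistic_key(x0, r=3.99, n=16):
            """Derive n key bytes from logistic-map iterates (toy key schedule)."""
            x, key = x0, []
            for _ in range(n):
                x = r * x * (1.0 - x)
                key.append(int(x * 256) % 256)
            return key

        def rc4_keystream(key, length):
            """Standard RC4: key-scheduling (KSA) then pseudo-random generation (PRGA)."""
            S, j = list(range(256)), 0
            for i in range(256):                              # KSA
                j = (j + S[i] + key[i % len(key)]) % 256
                S[i], S[j] = S[j], S[i]
            i = j = 0
            out = []
            for _ in range(length):                           # PRGA
                i = (i + 1) % 256
                j = (j + S[i]) % 256
                S[i], S[j] = S[j], S[i]
                out.append(S[(S[i] + S[j]) % 256])
            return out

        key = logistic_key(0.3141592)
        stream = rc4_keystream(key, 8)
        cipher = [p ^ k for p, k in zip([10, 20, 30], stream)]  # XOR diffusion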

  4. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested for both synthetic and real images. The results of the simulations are very satisfactory.

  5. A zero-liquid-discharge scheme for vanadium extraction process by electrodialysis-based technology.

    PubMed

    Wang, Meng; Xing, Hong-Bo; Jia, Yu-Xiang; Ren, Qing-Chun

    2015-12-30

    The sharp increase in demand for vanadium makes the treatment of the wastewater generated by its extraction process an urgent problem. In this study, a hybrid process coupling electrodialysis with cooling crystallization is put forward for upgrading the conventional vanadium extraction process to zero discharge. Accordingly, the objective of this work lies in evaluating the feasibility of the proposed scheme on the basis of a systematic study of the influences of membrane types and operating parameters on the electrodialysis performance. The results indicate that the relative importance of osmosis and electro-osmosis to overall water transport is closely related to the applied current density. An increase in the applied current density and a decrease in the mole ratio of water and salt flux will contribute to the concentration degree. Moreover, it is worth noting that a relatively large concentration ratio can result in a remarkable decrease of current efficiency and increase of energy consumption. In general, the reclamation scheme can readily achieve recovered water with relatively low salt content and a highly concentrated Na2SO4 solution (e.g., 300 g/L) for producing high-purity sodium sulphate crystals. Copyright © 2015. Published by Elsevier B.V.

  6. A fast numerical scheme for causal relativistic hydrodynamics with dissipation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro

    2011-08-01

    Highlights: We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. Since we use a Riemann solver for the advection steps, our method can capture shocks very accurately. Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process on the timescale of collisions of the constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculation into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of the Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high-energy phenomena of astrophysics and nuclear physics.

  7. Exploration of multiphoton entangled states by using weak nonlinearities

    PubMed Central

    He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting

    2016-01-01

    We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with previous schemes, the present method is more feasible because only small phase shifts are needed, instead of a series of related functions of photon numbers, in the process of interaction with the Kerr nonlinearities. In the absence of decoherence, we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and that it is possible to produce entangled states involving a large number of photons. PMID:26751044

  8. Nagy-Soper Subtraction: a Review

    NASA Astrophysics Data System (ADS)

    Robens, Tania

    2013-07-01

    We review an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.

  9. RiPPAS: A Ring-Based Privacy-Preserving Aggregation Scheme in Wireless Sensor Networks

    PubMed Central

    Zhang, Kejia; Han, Qilong; Cai, Zhipeng; Yin, Guisheng

    2017-01-01

    Recently, data privacy in wireless sensor networks (WSNs) has received increased attention. The characteristics of WSNs mean that users' queries are mainly aggregation queries. In this paper, the problem of processing aggregation queries in WSNs with data privacy preservation is investigated. A Ring-based Privacy-Preserving Aggregation Scheme (RiPPAS) is proposed. RiPPAS adopts a ring structure to perform aggregation. It uses a pseudonym mechanism for anonymous communication and a homomorphic encryption technique to add noise to data that could otherwise easily be disclosed. RiPPAS can handle both sum() queries and min()/max() queries, whereas the existing privacy-preserving aggregation methods can only deal with sum() queries. For processing sum() queries, compared with the existing methods, RiPPAS has advantages in the aspects of privacy preservation and communication efficiency, which can be proved by theoretical analysis and simulation results. For processing min()/max() queries, RiPPAS provides effective privacy preservation and has low communication overhead. PMID:28178197

  10. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.

    2014-01-01

    All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguide are a promising candidate to overcome the limitation of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. Nonlinear coefficient as high as 8708 W⁻¹/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides the enhanced nonlinear interaction and accordingly low power threshold. The results show that a required input peak power level less than 0.4 W can be achieved, along with the 1.98-bit effective-number-of-bit and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847

  11. Entanglement of remote material qubits through nonexciting interaction with single photons

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhang, Pengfei; Zhang, Tiancai

    2018-05-01

    We propose a scheme to entangle multiple material qubits through interaction with single photons via nonexciting processes in strongly coupled systems. The basic idea rests on the material-state-dependent reflection and transmission of the input photons. The material qubits in several systems can thus be entangled when one photon interacts with each system in cascade and the photon paths are mixed by photon detection. Because the processes do not excite the material qubits, their states are unchanged, which makes it possible to purify the entangled states using more photons under realistic imperfect parameters. It also allows the scheme to be scaled up directly to entangle more qubits. A detailed analysis of the fidelity and success probability of the scheme in the framework of an optical Fabry-Pérot cavity-based strongly coupled system is presented. It is shown that a two-qubit entangled state with fidelity above 0.99 is promised with only two photons using currently feasible experimental parameters. Our scheme can also be directly implemented in other strongly coupled systems.

  12. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide.

    PubMed

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P K A

    2014-11-24

    All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguide are a promising candidate to overcome the limitation of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. Nonlinear coefficient as high as 8708 W(-1)/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides the enhanced nonlinear interaction and accordingly low power threshold. The results show that a required input peak power level less than 0.4 W can be achieved, along with the 1.98-bit effective-number-of-bit and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems.

  13. A molecular logic gate

    PubMed Central

    Kompa, K. L.; Levine, R. D.

    2001-01-01

    We propose a scheme for molecule-based information processing by combining well-studied spectroscopic techniques and recent results from chemical dynamics. Specifically it is discussed how optical transitions in single molecules can be used to rapidly perform classical (Boolean) logical operations. In the proposed way, a restricted number of states in a single molecule can act as a logical gate equivalent to at least two switches. It is argued that the four-level scheme can also be used to produce gain, because it allows an inversion, and not only a switching ability. The proposed scheme is quantum mechanical in that it takes advantage of the discrete nature of the energy levels but, we here discuss the temporal evolution, with the use of the populations only. On a longer time range we suggest that the same scheme could be extended to perform quantum logic, and a tentative suggestion, based on an available experiment, is discussed. We believe that the pumping can provide a partial proof of principle, although this and similar experiments were not interpreted thus far in our terms. PMID:11209046

  14. Image smoothing and enhancement via min/max curvature flow

    NASA Astrophysics Data System (ADS)

    Malladi, Ravikanth; Sethian, James A.

    1996-03-01

    We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
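
    A minimal curvature-flow step of the kind described can be written directly from the level set formulation (a sketch assuming a plain mean-curvature speed law; the min/max speed switch of the actual scheme is omitted):

        import numpy as np

        def curvature_flow_step(img, dt=0.1, eps=1e-8):
            """One step of mean-curvature flow on the isointensity contours."""
            Iy, Ix = np.gradient(img)            # gradients along rows, columns
            Iyy, Iyx = np.gradient(Iy)
            Ixy, Ixx = np.gradient(Ix)
            num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2
            grad2 = Ix**2 + Iy**2
            kappa_grad = num / (grad2 + eps)     # equals curvature * |grad I|
            return img + dt * kappa_grad

        rng = np.random.default_rng(3)
        noisy = np.clip(0.5 + rng.normal(0.0, 0.2, (32, 32)), 0.0, 1.0)
        smoothed = curvature_flow_step(noisy)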

  15. Research on crude oil storage and transportation based on optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods have been widely used in the optimal scheduling and operation of complex production systems. The theoretical research results are implemented on the C++Builder 6 development platform. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating-roof oil storage tank is developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling cost of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of a crude oil storage and transportation system.

  16. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.

  17. Gait Characteristic Analysis and Identification Based on the iPhone's Accelerometer and Gyrometer

    PubMed Central

    Sun, Bing; Wang, Yang; Banda, Jacob

    2014-01-01

    Gait identification is a valuable approach to identify humans at a distance. In this paper, gait characteristics are analyzed based on an iPhone's accelerometer and gyrometer, and a new approach is proposed for gait identification. Specifically, gait datasets are collected by the triaxial accelerometer and gyrometer embedded in an iPhone. Then, the datasets are processed to extract gait characteristic parameters which include gait frequency, symmetry coefficient, dynamic range and similarity coefficient of characteristic curves. Finally, a weighted voting scheme dependent upon the gait characteristic parameters is proposed for gait identification. Four experiments are implemented to validate the proposed scheme. The attitude and acceleration solutions are verified by simulation. Then the gait characteristics are analyzed by comparing two sets of actual data, and the performance of the weighted voting identification scheme is verified by 40 datasets of 10 subjects. PMID:25222034
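
    A generic weighted-voting step of the kind described might look as follows (hypothetical feature names, weights and similarity measure; the paper's actual parameters differ):

        def weighted_vote(candidates, features, weights):
            """Score each enrolled subject by a weighted sum of per-feature votes.

            candidates: {subject: {feature_name: reference_value}}
            features:   {feature_name: measured_value}
            weights:    {feature_name: vote_weight}
            """
            scores = {}
            for subject, ref in candidates.items():
                s = 0.0
                for name, value in features.items():
                    closeness = 1.0 / (1.0 + abs(value - ref[name]))
                    s += weights[name] * closeness
                scores[subject] = s
            return max(scores, key=scores.get)   # best-matching subject

        enrolled = {"A": {"freq": 1.9, "sym": 0.80}, "B": {"freq": 2.3, "sym": 0.60}}
        print(weighted_vote(enrolled, {"freq": 2.2, "sym": 0.65},
                            {"freq": 0.6, "sym": 0.4}))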

  18. Method for generating maximally entangled states of multiple three-level atoms in cavity QED

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Guangsheng; Li Shushen; Feng Songlin

    2004-03-01

    We propose a scheme to generate maximally entangled states (MESs) of multiple three-level atoms in microwave cavity QED based on the resonant atom-cavity interaction. In the scheme, multiple three-level atoms initially in their ground states are sequentially sent through two suitably prepared cavities. After a process of appropriate atom-cavity interaction, a subsequent measurement on the second cavity field projects the atoms onto the MESs. The practical feasibility of this method is also discussed.

  19. GENERAL: Teleportation of a Bipartite Entangled Coherent State via a Four-Partite Cluster-Type Entangled State

    NASA Astrophysics Data System (ADS)

    Chen, Hui-Na; Liu, Jin-Ming

    2009-10-01

    We present an optical scheme to almost completely teleport a bipartite entangled coherent state using a four-partite cluster-type entangled coherent state as quantum channel. The scheme is based on optical elements such as beam splitters, phase shifters, and photon detectors. We also obtain the average fidelity of the teleportation process. It is shown that the average fidelity is quite close to unity if the mean photon number of the coherent state is not too small.

  20. MultiScheme: A Parallel Processing System Based on MIT (Massachusetts Institute of Technology) Scheme.

    DTIC Science & Technology

    1987-09-01

    Later, when these allocation strategies become a performance concern, the scheduler can be molded to fit the particular application... distracts attention from the more important points that this example is intended to demonstrate. The implementation, therefore, is described separately in... for the benefit of outsiders. From the object's point of view the pipeline is nothing but a list of messages that tell it how to mutate its own state.

  1. Large Area Crop Inventory Experiment (LACIE). An early estimate of small grains acreage

    NASA Technical Reports Server (NTRS)

    Lea, R. N.; Kern, D. M. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. A major advantage of this scheme is that it needs minimal human intervention. The entire scheme, with the exception of the choice of dates, can be computerized and the results obtained in minutes. The decision to limit the number of acquisitions processed to four was made to facilitate operation on the particular computer being used. Some earlier runs on another computer system were based on as many as seven biophase-1 acquisitions.

  2. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
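
    The scoring step described, log-transformation followed by z-scoring against an assigned value, reduces to a one-liner per result (a generic sketch; scheme-specific assigned values and target standard deviations apply in practice):

        import numpy as np

        def log_z_scores(results, assigned, sigma_log):
            """z = (log10(x) - log10(assigned)) / sigma, per proficiency-test result."""
            return (np.log10(results) - np.log10(assigned)) / sigma_log

        x = np.array([0.8, 1.1, 1.0, 2.5])   # reported GMO contents, e.g. in %
        print(np.round(log_z_scores(x, assigned=1.0, sigma_log=0.15), 2))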

  3. Multi-criteria decision-making on assessment of proposed tidal barrage schemes in terms of environmental impacts.

    PubMed

    Wu, Yunna; Xu, Chuanbo; Ke, Yiming; Chen, Kaifeng; Xu, Hu

    2017-12-15

    For tidal range power plants to be sustainable, the environmental impacts caused by the implementation of various tidal barrage schemes must be assessed before construction. However, several problems exist in current research: firstly, the evaluation criteria of tidal barrage scheme environmental impact assessment (EIA) are not adequate; secondly, the uncertainty of criteria information fails to be processed properly; thirdly, the correlation among criteria is unreasonably measured. Hence the contributions of this paper are as follows: firstly, an evaluation criteria system is established along the three dimensions of hydrodynamic, biological and morphological aspects. Secondly, a cloud model is applied to describe the uncertainty of criteria information. Thirdly, the Choquet integral with respect to the λ-fuzzy measure is introduced to measure the correlation among criteria. On these bases, a multi-criteria decision-making framework for tidal barrage scheme EIA is established to select the optimal scheme. Finally, a case study demonstrates the effectiveness of the proposed framework. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Moving template analysis of crack growth. 1: Procedure development

    NASA Astrophysics Data System (ADS)

    Padovan, Joe; Guo, Y. H.

    1994-06-01

    Based on a moving template procedure, this two part series will develop a method to follow the crack tip physics in a self-adaptive manner which provides a uniformly accurate prediction of crack growth. For multiple crack environments, this is achieved by attaching a moving template to each crack tip. The templates are each individually oriented to follow the associated growth orientation and rate. In this part, the essentials of the procedure are derived for application to fatigue crack environments. Overall the scheme derived possesses several hierarchical levels, i.e. the global model, the interpolatively tied moving template, and a multilevel element death option to simulate the crack wake. To speed up computation, the hierarchical polytree scheme is used to reorganize the global stiffness inversion process. In addition to developing the various features of the scheme, the accuracy of predictions for various crack lengths is also benchmarked. Part 2 extends the scheme to multiple crack problems. Extensive benchmarking is also presented to verify the scheme.

  5. Assessment strategies for municipal selective waste collection schemes.

    PubMed

    Ferreira, Fátima; Avelino, Catarina; Bentes, Isabel; Matos, Cristina; Teixeira, Carlos Afonso

    2017-01-01

    An important strategy to promote strong sustainable growth relies on efficient municipal waste management, with the phasing out of waste landfilling through waste prevention and recycling emerging as a major target. For this purpose, effective collection schemes are required, in particular for selective waste collection, to pursue more efficient and higher-quality recycling of reusable materials. This paper addresses the assessment and benchmarking of selective collection schemes, relevant for guiding future operational improvements. In particular, the assessment is based on the monitoring and statistical analysis of a core set of performance indicators that highlight collection trends, complemented with a performance index that gathers a weighted linear combination of these indicators. This combined analysis provides a potential tool to support decision makers involved in the process of selecting the collection scheme with the best overall performance. The presented approach was applied to a case study conducted in Oporto Municipality, with data gathered from two distinct selective collection schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
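
    The performance index described, a weighted linear combination of normalised indicators, can be sketched generically (indicator names, ranges and weights below are invented for illustration):

        def performance_index(indicators, weights):
            """Weighted linear combination of min-max normalised indicators.

            indicators: {name: (value, worst, best)} for one collection scheme.
            weights:    {name: weight}, ideally summing to 1.
            """
            score = 0.0
            for name, (value, worst, best) in indicators.items():
                norm = (value - worst) / (best - worst)   # 0 = worst, 1 = best
                score += weights[name] * norm
            return score

        kerbside = {"kg_per_capita": (48.0, 0.0, 80.0),     # higher is better
                    "cost_per_tonne": (95.0, 200.0, 50.0)}  # lower is better
        print(round(performance_index(kerbside,
                                      {"kg_per_capita": 0.6, "cost_per_tonne": 0.4}), 3))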

  6. A Model-Free Scheme for Meme Ranking in Social Media.

    PubMed

    He, Saike; Zheng, Xiaolong; Zeng, Daniel

    2016-01-01

    The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite this importance, few existing works solve this problem well; they are either hampered by impractical assumptions or by an inability to characterize dynamic information. In this paper, we therefore elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence.

  7. A Model-Free Scheme for Meme Ranking in Social Media

    PubMed Central

    He, Saike; Zheng, Xiaolong; Zeng, Daniel

    2015-01-01

    The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite this importance, few existing works solve this problem well; they are either hampered by impractical assumptions or by an inability to characterize dynamic information. In this paper, we therefore elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence. PMID:26823638

  8. A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes

    PubMed Central

    Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more participants have the ability to reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme. PMID:28072851
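
    The (k, n) threshold behaviour itself is easiest to see in classical Shamir sharing over a prime field (shown only to make the threshold idea concrete; the paper's scheme shares image data via compressed sensing, not polynomials):

        import random

        P = 2**13 - 1  # a small prime field, for illustration only

        def share(secret, k, n, rng=random):
            """Split secret into n shares; any k of them reconstruct it."""
            coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 recovers the secret from k shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = share(1234, k=3, n=5)
        print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 1234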

  9. Frequency domain laser velocimeter signal processor: A new signal processing scheme

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Clemmons, James I., Jr.

    1987-01-01

    A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
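
    The zero-crossing branch of such a processor can be sketched in a few lines (a toy Python version; the actual instrument operates on 2-bit transient-recorder samples with adaptive digital filtering):

        import numpy as np

        def zero_crossing_frequency(signal, sample_rate):
            """Estimate burst frequency from the zero-crossing count."""
            signal = signal - signal.mean()   # remove the signal pedestal
            crossings = np.sum(np.signbit(signal[:-1]) != np.signbit(signal[1:]))
            duration = (len(signal) - 1) / sample_rate
            return crossings / (2.0 * duration)   # two crossings per cycle

        t = np.arange(0, 2e-6, 1.0 / 500e6)       # 2 microseconds at 500 MHz sampling
        burst = np.sin(2 * np.pi * 40e6 * t) * np.exp(-((t - 1e-6) / 4e-7) ** 2)
        print(f"{zero_crossing_frequency(burst, 500e6) / 1e6:.1f} MHz")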

  10. Sensing of molecules using quantum dynamics

    PubMed Central

    Migliore, Agostino; Naaman, Ron; Beratan, David N.

    2015-01-01

    We design sensors where information is transferred between the sensing event and the actuator via quantum relaxation processes, through distances of a few nanometers. We thus explore the possibility of sensing using intrinsically quantum mechanical phenomena that are also at play in photobiology, bioenergetics, and information processing. Specifically, we analyze schemes for sensing based on charge transfer and polarization (electronic relaxation) processes. These devices can have surprising properties. Their sensitivity can increase with increasing separation between the sites of sensing (the receptor) and the actuator (often a solid-state substrate). This counterintuitive response and other quantum features give these devices favorable characteristics, such as enhanced sensitivity and selectivity. Using coherent phenomena at the core of molecular sensing presents technical challenges but also suggests appealing schemes for molecular sensing and information transfer in supramolecular structures. PMID:25911636

  11. Process-based quality for thermal spray via feedback control

    NASA Astrophysics Data System (ADS)

    Dykhuizen, R. C.; Neiser, R. A.

    2006-09-01

    Quality control of a thermal spray system manufacturing process is difficult due to the many input variables that need to be controlled. Great care must be taken to ensure that the process remains constant to obtain a consistent quality of the parts. Control is greatly complicated by the fact that measurement of particle velocities and temperatures is a noisy stochastic process. This article illustrates the application of quality control concepts to a wire flame spray process. A central feature of the real-time control system is an automatic feedback control scheme that provides fine adjustments to ensure that uncontrolled variations are accommodated. It is shown how the control vectors can be constructed from simple process maps to independently control particle velocity and temperature. This control scheme is shown to perform well in a real production environment. We also demonstrate that slight variations in the feed wire curvature can greatly influence the process. Finally, the geometry of the spray system and sensor must remain constant for the best reproducibility.
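
    The decoupling idea, inverting a locally linear process map so that each actuator combination moves one particle property at a time, can be sketched as follows (the 2x2 sensitivities and gain are invented placeholders, not measured values):

        import numpy as np

        # Hypothetical local sensitivities d(velocity, temperature)/d(air, fuel).
        J = np.array([[12.0,  2.0],    # m/s per unit air flow, per unit fuel flow
                      [-3.0, 25.0]])   # deg C per unit air flow, per unit fuel flow

        J_inv = np.linalg.inv(J)       # columns act as decoupled control vectors

        def control_step(v_err, t_err, gain=0.5):
            """Proportional feedback: actuator increments that cancel the errors."""
            return gain * (J_inv @ np.array([v_err, t_err]))

        d_air, d_fuel = control_step(v_err=-8.0, t_err=30.0)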

  12. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  13. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.

    PubMed

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-21

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  14. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    PubMed Central

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-01-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262

  15. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (Location-Based Services), the demand for commercial indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithm, and the cost of positioning are hard to balance simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on low-energy Bluetooth. The main steps include: 1) the establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic knowledge accumulation rather than a single positioning process. The scheme adopts cheap equipment and provides a new idea for the theory and method of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has value for commercial application.

  16. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of forecast error, caused in part by considerable uncertainty about the optimal values of parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be constrained simultaneously. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
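
    The Bayesian constraint step can be illustrated with a generic random-walk Metropolis sampler fitting a power-law process rate a·M^b to synthetic observations. This is a schematic stand-in for the MCMC machinery described above, with the data, prior, and noise level invented for the example:

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "observed" rates, standing in for moment-based process rates.
      M = np.linspace(0.1, 2.0, 40)              # prognostic moment values
      obs = 2.0 * M**1.5 + rng.normal(0, 0.1, M.size)

      def log_post(theta, sigma=0.1):
          a, b = theta
          if a <= 0:                             # flat prior, positivity only
              return -np.inf
          resid = obs - a * M**b
          return -0.5 * np.sum((resid / sigma) ** 2)

      def metropolis(theta0, steps=20000, prop=0.05):
          chain = [np.asarray(theta0, float)]
          lp = log_post(chain[0])
          for _ in range(steps):
              cand = chain[-1] + rng.normal(0, prop, 2)
              lp_c = log_post(cand)
              if np.log(rng.uniform()) < lp_c - lp:  # accept/reject step
                  chain.append(cand)
                  lp = lp_c
              else:
                  chain.append(chain[-1])
          return np.array(chain)

      chain = metropolis([1.0, 1.0])
      print(chain[5000:].mean(axis=0))   # posterior mean near (2.0, 1.5)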

  17. Suppression of Lateral Diffusion and Surface Leakage Currents in nBn Photodetectors Using an Inverted Design

    NASA Astrophysics Data System (ADS)

    Du, X.; Savich, G. R.; Marozas, B. T.; Wicks, G. W.

    2018-02-01

    Surface leakage and lateral diffusion currents in InAs-based nBn photodetectors have been investigated. Devices fabricated using a shallow etch processing scheme, which etches through the top contact and stops at the barrier, exhibited large lateral diffusion current but undetectably low surface leakage. Such large lateral diffusion current significantly increases the dark current, especially in small devices, and causes pixel-to-pixel crosstalk in detector arrays. To eliminate the lateral diffusion current, two different approaches were examined. The conventional solution utilized a deep etch process, which etches through the top contact, barrier, and absorber. This deep etch processing scheme eliminated lateral diffusion but introduced high surface current along the device mesa sidewalls, increasing the dark current. A high device failure rate was also observed in deep-etched nBn structures. An alternative approach to limiting lateral diffusion used an inverted nBn structure that has its absorber grown above the barrier. Like the shallow etch process on conventional nBn structures, the inverted nBn devices were fabricated with a processing scheme that only etches the top layer (the absorber, in this case) and avoids etching through the barrier. The results show that inverted nBn devices have the advantage of eliminating the lateral diffusion current without introducing elevated surface current.

  18. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. Fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can easily be obtained using the multi-phase operations. Performance issues are also considered.
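
    A software analogue of the QRD RLS recursion can be written with NumPy's Householder-based QR: the triangular factor and rotated right-hand side are decayed by a forgetting factor and re-triangularized with each new sample. This illustrates the numerical idea only, not the dissertation's systolic implementation; the forgetting factor and names are arbitrary:

      import numpy as np

      def qrd_rls(X, y, lam=0.99):
          """Recursive least squares via repeated QR updates."""
          n = X.shape[1]
          R = np.zeros((n, n))              # upper-triangular factor
          z = np.zeros(n)                   # rotated right-hand side
          for x_t, y_t in zip(X, y):
              # Stack the decayed factor with the new row, re-triangularize.
              A = np.vstack([np.sqrt(lam) * R, x_t])
              b = np.append(np.sqrt(lam) * z, y_t)
              Q, R = np.linalg.qr(A)        # Householder-based in LAPACK
              z = Q.T @ b
          return np.linalg.solve(R, z)      # back-substitution for weights

      X = np.random.randn(200, 3)
      w_true = np.array([1.0, -2.0, 0.5])
      print(qrd_rls(X, X @ w_true))         # recovers w_true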

  19. Introduction to the Apollo collections: Part 2: Lunar breccias

    NASA Technical Reports Server (NTRS)

    Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.

    1979-01-01

    Basic petrographic, chemical, and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed, together with a classification scheme based on matrix texture and mineralogy and on the nature and abundance of glass present both in the matrix and as clasts. A summary of the classification scheme describes the characteristic features of each breccia group. The cratering process, i.e., the sequence of events immediately following an impact, is discussed, with emphasis on the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).

  20. Luminescent sensing and imaging of oxygen: fierce competition to the Clark electrode.

    PubMed

    Wolfbeis, Otto S

    2015-08-01

    Luminescence-based sensing schemes for oxygen have experienced rapid growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements with fiber optic microsensors, but includes additional capabilities such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid-state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle-based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are nowadays produced in large quantities by industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. © 2015 The Author. Bioessays published by WILEY Periodicals, Inc.
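
    Luminescence quenching by oxygen commonly follows the Stern-Volmer relation I0/I = 1 + Ksv·[O2], so readout amounts to inverting it. The sketch below is generic; the Ksv value and units are placeholders, not taken from the essay:

      def oxygen_from_intensity(I, I0, Ksv):
          """Invert the Stern-Volmer relation I0/I = 1 + Ksv*[O2].

          I   : measured luminescence intensity (or lifetime)
          I0  : intensity in the absence of oxygen
          Ksv : Stern-Volmer quenching constant per unit O2
          """
          return (I0 / I - 1.0) / Ksv

      # Example: a 40% intensity drop with Ksv = 0.02 per hPa -> ~33 hPa pO2
      print(oxygen_from_intensity(I=60.0, I0=100.0, Ksv=0.02))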

  1. Design of a superconducting 28 GHz ion source magnet for FRIB using a shell-based support structure

    DOE PAGES

    Felice, H.; Rochepault, E.; Hafalia, R.; ...

    2014-12-05

    The Superconducting Magnet Program at the Lawrence Berkeley National Laboratory (LBNL) is completing the design of a 28 GHz NbTi ion source magnet for the Facility for Rare Isotope Beams (FRIB). The design parameters are based on those of the ECR ion source VENUS, in operation at LBNL since 2002, featuring a sextupole-in-solenoids configuration. Whereas most of the magnet components (such as conductor, magnetic design, and protection scheme) remain very similar to the VENUS magnet components, the support structure of the FRIB ion source uses a different concept. A shell-based support structure using bladders and keys is implemented in the design, allowing fine tuning of the sextupole preload and reversibility of the magnet assembly process. As part of the design work, the conductor insulation scheme, coil fabrication processes and assembly procedures are also explored to optimize performance. We present the main features of the design, emphasizing the integrated design approach used at LBNL to achieve this result.

  2. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval that uses dynamic indexing and guided search in a hierarchical structure, extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best match. A series of case studies is reported, including a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  3. The hierarchical expert tuning of PID controllers using tools of soft computing.

    PubMed

    Karray, F; Gueaieb, W; Al-Sharhan, S

    2002-01-01

    We present soft computing-based results pertaining to the hierarchical tuning process of PID controllers located within the control loop of a class of nonlinear systems. The results are compared with PID controllers implemented either in a stand-alone scheme or as part of a conventional gain-scheduling structure. This work is motivated by the increasing need in industry to design highly reliable and efficient controllers for dealing with the regulation and tracking capabilities of complex processes characterized by nonlinearities and possibly time-varying parameters. The proposed soft computing-based controllers are hybrid in nature, in that they integrate within a well-defined hierarchical structure the benefits of hard algorithmic controllers with those having supervisory capabilities. The proposed controllers also have the distinct features of learning and auto-tuning without the need for tedious and computationally expensive online system identification schemes.

  4. Fuzzy adaptive strong tracking scaled unscented Kalman filter for initial alignment of large misalignment angles

    NASA Astrophysics Data System (ADS)

    Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui

    2016-07-01

    In the initial alignment process of a strapdown inertial navigation system (SINS), large misalignment angles introduce a nonlinear estimation problem, which can usually be handled with the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and the strong tracking scaled unscented Kalman filter (STSUKF) is proposed with fixed parameters to improve convergence speed; these parameters, however, are hand-constructed and uncertain in real applications. To further improve alignment stability and reduce manual parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). As a result, an initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.

  5. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    NASA Astrophysics Data System (ADS)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to a template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If none of the obtained gray-scale differences exceeds the threshold value, a fuzzy match of the quantum images is declared. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at low cost and also enables exponentially significant speedup via quantum parallel computation.
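
    Ignoring the quantum encoding, the thresholded gray-scale difference test has a direct classical analogue, which may clarify the matching criterion (the function name and test data are invented):

      import numpy as np

      def fuzzy_match(reference, template, threshold):
          """Report every region whose per-pixel absolute gray-scale
          difference from the template never exceeds the threshold."""
          H, W = reference.shape
          h, w = template.shape
          hits = []
          for i in range(H - h + 1):
              for j in range(W - w + 1):
                  diff = np.abs(reference[i:i + h, j:j + w].astype(int)
                                - template.astype(int))
                  if diff.max() <= threshold:
                      hits.append((i, j))
          return hits

      ref = np.random.randint(0, 256, (32, 32))
      tpl = ref[10:14, 10:14]
      print(fuzzy_match(ref, tpl, threshold=0))   # finds (10, 10)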

  6. Improved fuzzy PID controller design using predictive functional control structure.

    PubMed

    Wang, Yuzhong; Jin, Qibing; Zhang, Ridong

    2017-11-01

    In conventional PID schemes, the ensemble control performance may be unsatisfactory due to limited degrees of freedom under various kinds of uncertainty. To overcome this disadvantage, a novel PID control method that inherits the advantages of fuzzy PID control and predictive functional control (PFC) is presented and verified on the temperature model of a coke furnace. Based on the framework of PFC, the prediction of future process behavior is first obtained using the current process input signal. Then, fuzzy PID control based on the multi-step prediction is introduced to acquire the optimal control law. Finally, the case study on a temperature model of a coke furnace shows the effectiveness of the fuzzy PID control scheme when compared with conventional PID control and fuzzy self-adaptive PID control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
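
    The elitism-based immigrants mechanism itself is compact; the sketch below mutates the current elite to create immigrants that replace the worst individuals each generation. Selection and crossover for the rest of the population are omitted, and the fitness function and rates are invented:

      import numpy as np

      rng = np.random.default_rng(1)

      def evolve(fitness, pop, gens=200, ratio=0.2, pm=0.05):
          """GA skeleton with elitism-based immigrants."""
          n, L = pop.shape
          n_imm = int(ratio * n)
          for _ in range(gens):
              f = np.array([fitness(ind) for ind in pop])
              elite = pop[f.argmax()].copy()
              # Immigrants: elite with bitwise mutation at rate pm.
              flips = rng.random((n_imm, L)) < pm
              immigrants = np.where(flips, 1 - elite, elite)
              pop[np.argsort(f)[:n_imm]] = immigrants  # replace the worst
          f = np.array([fitness(ind) for ind in pop])
          return pop[f.argmax()]

      # OneMax-style toy; in a truly dynamic setting the target would change.
      target = rng.integers(0, 2, 30)
      pop = rng.integers(0, 2, (40, 30))
      best = evolve(lambda s: int(np.sum(s == target)), pop)
      print((best == target).mean())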

  8. Hierarchical adaptation scheme for multiagent data fusion and resource management in situation analysis

    NASA Astrophysics Data System (ADS)

    Benaskeur, Abder R.; Roy, Jean

    2001-08-01

    Sensor Management (SM) has to do with how to best manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on the contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the realtime improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM Concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on the control theory and within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.

  9. Sub-picowatt/kelvin resistive thermometry for probing nanoscale thermal transport.

    PubMed

    Zheng, Jianlin; Wingert, Matthew C; Dechaumphai, Edward; Chen, Renkun

    2013-11-01

    Advanced instrumentation in thermometry holds the key for experimentally probing fundamental heat transfer physics. However, instrumentation with simultaneously high thermometry resolution and low parasitic heat conduction is still not available today. Here we report a resistive thermometry scheme with ~50 μK temperature resolution and ~0.25 pW/K thermal conductance resolution, which is achieved through schemes using both modulated heating and common mode noise rejection. The suspended devices used herein have been specifically designed to possess short thermal time constants and minimal attenuation effects associated with the modulated heating current. Furthermore, we have systematically characterized the parasitic background heat conductance, which is shown to be significantly reduced using the new device design and can be effectively eliminated using a "canceling" scheme. Our results pave the way for probing fundamental nanoscale thermal transport processes using a general scheme based on resistive thermometry.
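
    The modulated-heating measurement can be caricatured as lock-in (synchronous) detection, which is what rejects broadband noise away from the modulation frequency. All numbers below are invented and unrelated to the reported apparatus:

      import numpy as np

      rng = np.random.default_rng(3)
      fs, f_mod, n = 10_000.0, 37.0, 100_000
      t = np.arange(n) / fs

      # A ~50 microkelvin-scale response modulated at f_mod, buried in noise.
      amp_true = 50e-6
      sig = (amp_true * np.sin(2 * np.pi * f_mod * t)
             + 1e-3 * rng.standard_normal(n))

      # Synchronous demodulation: project onto the reference quadratures.
      X = 2 * np.mean(sig * np.sin(2 * np.pi * f_mod * t))
      Y = 2 * np.mean(sig * np.cos(2 * np.pi * f_mod * t))
      print(np.hypot(X, Y))   # ~5e-5 recovered despite much larger noise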

  10. An Efficient Location Verification Scheme for Static Wireless Sensor Networks.

    PubMed

    Kim, In-Hwan; Kim, Bo-Sung; Song, JooSeok

    2017-01-24

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimates in some situations. Location verification can address these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, or a trusted third party, which cause more communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces these overheads by utilizing the implicit involvement of sensors and eliminating several requirements. To achieve this, we use the mutually-shared region between the location claimant and the verifier for location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, compared with other location verification schemes, in single-sensor verification. In addition, simulation results for verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors.
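
    A toy geometric version of region-based verification, assuming an idealized disc radio model (this is a simplification for illustration, not MSRLV's actual protocol):

      import numpy as np

      def verify_claim(claim, verifier, hearers, radio_range):
          """Accept a claimed position only if it lies inside the
          verifier's radio disc and inside the disc of every neighbor
          that actually heard the claimant (the mutually shared region)."""
          claim = np.asarray(claim, float)
          if np.linalg.norm(claim - np.asarray(verifier, float)) > radio_range:
              return False
          return all(np.linalg.norm(claim - np.asarray(h, float)) <= radio_range
                     for h in hearers)

      # A claim placed outside a hearing neighbor's range is rejected:
      print(verify_claim((5, 0), verifier=(0, 0), hearers=[(9, 9)],
                         radio_range=8))   # False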

  11. An Efficient Location Verification Scheme for Static Wireless Sensor Networks

    PubMed Central

    Kim, In-hwan; Kim, Bo-sung; Song, JooSeok

    2017-01-01

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimates in some situations. Location verification can address these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, or a trusted third party, which cause more communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces these overheads by utilizing the implicit involvement of sensors and eliminating several requirements. To achieve this, we use the mutually-shared region between the location claimant and the verifier for location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, compared with other location verification schemes, in single-sensor verification. In addition, simulation results for verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors. PMID:28125007

  12. Assimilating the Future for Better Forecasts and Earlier Warnings

    NASA Astrophysics Data System (ADS)

    Du, H.; Wheatcroft, E.; Smith, L. A.

    2016-12-01

    Multi-model ensembles have become popular tools to account for some of the uncertainty, due to model inadequacy, in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to have different dynamical strengths and weaknesses. With statistical post-processing, such information is carried only by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed for multi-model ensemble schemes, with the aim of operationally integrating the dynamical information about the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.

  13. Sensitivity of CAM5-simulated Arctic clouds and radiation to ice nucleation parameterization

    DOE PAGES

    Xie, Shaocheng; Liu, Xiaohong; Zhao, Chuanfeng; ...

    2013-08-06

    Sensitivity of Arctic clouds and radiation in the Community Atmospheric Model, version 5, to the ice nucleation process is examined by testing a new physically based ice nucleation scheme that links the variation of ice nuclei (IN) number concentration to aerosol properties. The default scheme parameterizes the IN concentration simply as a function of ice supersaturation. The new scheme leads to a significant reduction in simulated IN concentration at all latitudes, while changes in cloud amounts and properties are mainly seen at high- and midlatitude storm tracks. In the Arctic, there is a considerable increase in midlevel clouds and a decrease in low-level clouds, which result from the complex interaction among the cloud macrophysics, microphysics, and large-scale environment. The smaller IN concentrations result in an increase in liquid water path and a decrease in ice water path caused by the slowdown of the Bergeron–Findeisen process in mixed-phase clouds. Overall, there is an increase in the optical depth of Arctic clouds, which leads to a stronger cloud radiative forcing (net cooling) at the top of the atmosphere. The comparison with satellite data shows that the new scheme slightly improves low-level cloud simulations over most of the Arctic but produces too many midlevel clouds. Considerable improvements are seen in the simulated low-level clouds and their properties when compared with Arctic ground-based measurements. As a result, issues with the observations and the model–observation comparison in the Arctic region are discussed.

  14. The Impact of Sampling Schemes on the Site Frequency Spectrum in Nonequilibrium Subdivided Populations

    PubMed Central

    Städler, Thomas; Haubold, Bernhard; Merino, Carlos; Stephan, Wolfgang; Pfaffelhuber, Peter

    2009-01-01

    Using coalescent simulations, we study the impact of three different sampling schemes on patterns of neutral diversity in structured populations. Specifically, we are interested in two summary statistics based on the site frequency spectrum as a function of migration rate, demographic history of the entire substructured population (including timing and magnitude of specieswide expansions), and the sampling scheme. Using simulations implementing both finite-island and two-dimensional stepping-stone spatial structure, we demonstrate strong effects of the sampling scheme on Tajima's D (DT) and Fu and Li's D (DFL) statistics, particularly under specieswide (range) expansions. Pooled samples yield average DT and DFL values that are generally intermediate between those of local and scattered samples. Local samples (and to a lesser extent, pooled samples) are influenced by local, rapid coalescence events in the underlying coalescent process. These processes result in lower proportions of external branch lengths and hence lower proportions of singletons, explaining our finding that the sampling scheme affects DFL more than it does DT. Under specieswide expansion scenarios, these effects of spatial sampling may persist up to very high levels of gene flow (Nm > 25), implying that local samples cannot be regarded as being drawn from a panmictic population. Importantly, many data sets on humans, Drosophila, and plants contain signatures of specieswide expansions and effects of sampling scheme that are predicted by our simulation results. This suggests that validating the assumption of panmixia is crucial if robust demographic inferences are to be made from local or pooled samples. However, future studies should consider adopting a framework that explicitly accounts for the genealogical effects of population subdivision and empirical sampling schemes. PMID:19237689

  15. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model over three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop effective environmental economic optimization schemes in integrated watershed management.

  16. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model over three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop effective environmental economic optimization schemes in integrated watershed management. PMID:24191144

  17. Hybrid Packet-Pheromone-Based Probabilistic Routing for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Kashkouli Nejad, Keyvan; Shawish, Ahmed; Jiang, Xiaohong; Horiguchi, Susumu

    Ad-Hoc networks are collections of mobile nodes communicating using wireless media without any fixed infrastructure. Minimal configuration and quick deployment make Ad-Hoc networks suitable for emergency situations such as natural disasters or military conflicts. Current Ad-Hoc networks can support either high mobility or a high transmission rate, but not both at once, because they employ static approaches in their routing schemes. However, due to the continuous growth of Ad-Hoc networks in size, node mobility and transmission rate, the development of new adaptive and dynamic routing schemes has become crucial. In this paper we propose a new routing scheme that supports high transmission rates and high node mobility simultaneously in large Ad-Hoc networks, by combining a newly proposed packet-pheromone-based approach with the Hint Based Probabilistic Protocol (HBPP) for congestion avoidance with dynamic path selection in the packet forwarding process. Because it uses available feedback information, the proposed algorithm does not introduce any additional overhead. The extensive simulation-based analysis conducted in this paper indicates that the proposed algorithm offers low packet latency and achieves a significantly higher delivery probability than the available Hint-Based Probabilistic Protocol (HBPP).

  18. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2015-04-01

    A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited to processing large volumes of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
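
    A minimal histogram-based naive Bayesian mask in the spirit described (the feature, bin count, smoothing, and demo data are all invented for illustration):

      import numpy as np

      def train_mask(features, labels, bins=32, vrange=(0.0, 1.0)):
          """Per-feature histograms P(feature | class) with add-one
          smoothing; labels are 0 = clear, 1 = cloudy."""
          hists = {}
          for c in (0, 1):
              h = [np.histogram(features[labels == c, k], bins=bins,
                                range=vrange)[0] + 1.0
                   for k in range(features.shape[1])]
              hists[c] = [hk / hk.sum() for hk in h]
          return hists, labels.mean(), bins, vrange

      def classify(x, model):
          hists, prior, bins, (lo, hi) = model
          idx = np.clip(((x - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)
          log_odds = np.log(prior / (1 - prior))
          for k, i in enumerate(idx):
              log_odds += np.log(hists[1][k][i] / hists[0][k][i])
          return log_odds > 0               # True -> flag pixel as cloudy

      rng = np.random.default_rng(0)
      refl = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
      model = train_mask(refl[:, None], np.repeat([0, 1], 500))
      print(classify(np.array([0.8]), model))   # bright pixel -> cloudy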

  19. Fuzzy inference game approach to uncertainty in business decisions and market competitions.

    PubMed

    Oderanti, Festus Oluseyi

    2013-01-01

    The increasing challenges and complexity of business environments are making business decisions and operations more difficult for entrepreneurs to predict. Therefore, we developed a decision support scheme that can be used and adapted for various business decision processes. These involve decisions made under uncertain situations, such as business competition in the market or wage negotiation within a firm. The scheme uses game strategies and fuzzy inference concepts to effectively grasp the variables in these uncertain situations. The games are played between human and fuzzy players. The accuracy of the fuzzy rule base and the game strategies help to mitigate the adverse effects that a business may suffer from these uncertain factors. We also introduced learning, which enables the fuzzy player to adapt over time. We tested this scheme in different scenarios and found that it could be an invaluable tool in the hands of entrepreneurs operating in uncertain and competitive business environments.

  20. Learning target masks in infrared linescan imagery

    NASA Astrophysics Data System (ADS)

    Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter

    1997-04-01

    In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step consists of fusing the outputs of the several neural network filters to obtain the final result. To perform this fusion we use a belief network, which combines the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
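
    The first step, the Laplacian pyramid decomposition, can be sketched with SciPy; each level stores the detail lost when the image is blurred and downsampled by two. The kernel width and level count here are arbitrary choices, not the paper's settings:

      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def laplacian_pyramid(img, levels=3):
          """Band-pass detail images plus a residual low-pass image."""
          pyramid = []
          current = img.astype(float)
          for _ in range(levels):
              low = gaussian_filter(current, sigma=1.0)
              down = low[::2, ::2]
              up = zoom(down, 2, order=1)[:current.shape[0],
                                          :current.shape[1]]
              pyramid.append(current - up)  # detail at this scale
              current = down
          pyramid.append(current)           # low-pass residual
          return pyramid

      img = np.random.rand(64, 64)
      print([p.shape for p in laplacian_pyramid(img)])
      # [(64, 64), (32, 32), (16, 16), (8, 8)]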

  1. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework with a built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  2. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework with a built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  3. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework with a built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  4. Programmable fuzzy associative memory processor

    NASA Astrophysics Data System (ADS)

    Shao, Lan; Liu, Liren; Li, Guoqiang

    1996-02-01

    An optical system based on the method of spatial area-coding and multiple image scheme is proposed for fuzzy associative memory processing. Fuzzy maximum operation is accomplished by a ferroelectric liquid crystal PROM instead of a computer-based approach. A relative subsethood is introduced here to be used as a criterion for the recall evaluation.

  5. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  6. A Systems Approach to the Design and Operation of Effective Marketing Programs in Community Colleges.

    ERIC Educational Resources Information Center

    Scigliano, John A.

    1983-01-01

    Presents a research-based marketing model consisting of an environmental scanning process, a series of marketing audits, and an information-processing scheme. Views the essential elements of college marketing as information flow; high-level, long-term commitment; diverse strategies; innovation; and a broad view of marketing. Includes a marketing…

  7. Incentivising effort in governance of public hospitals: Development of a delegation-based alternative to activity-based remuneration.

    PubMed

    Søgaard, Rikke; Kristensen, Søren Rud; Bech, Mickael

    2015-08-01

    This paper is a first examination of the development of an alternative to activity-based remuneration in public hospitals, which is currently being tested at nine hospital departments in a Danish region. The objective is to examine the process of delegating the authority of designing new incentive schemes from the principal (the regional government) to the agents (the hospital departments). We adopt a theoretical framework where, when deciding about delegation, the principal should trade off an initiative effect against the potential cost of loss of control. The initiative effect is evaluated by studying the development process and the resulting incentive schemes for each of the departments. Similarly, the potential cost of loss of control is evaluated by assessing the congruence between focus of the new incentive schemes and the principal's objectives. We observe a high impact of the effort incentive in the form of innovative and ambitious selection of projects by the agents, leading to nine very different solutions across departments. However, we also observe some incongruence between the principal's stated objectives and the revealed private interests of the agents. Although this is a baseline study involving high uncertainty about the future, the findings point at some issues with the delegation approach that could lead to inefficient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on the data of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval dataset. A comparison of system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
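
    A self-contained back-propagation scorer in the same spirit can be sketched in a few lines of NumPy; the toy data below stands in for real credit applications, and none of the paper's seven learning schemes is reproduced:

      import numpy as np

      rng = np.random.default_rng(42)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def train(X, y, hidden=8, lr=0.1, epochs=2000):
          """One-hidden-layer network trained with plain backpropagation
          to output an approve/reject probability."""
          W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
          W2 = rng.normal(0, 0.5, (hidden, 1))
          for _ in range(epochs):
              h = sigmoid(X @ W1)           # forward pass
              p = sigmoid(h @ W2)
              d2 = p - y[:, None]           # cross-entropy gradient
              d1 = (d2 @ W2.T) * h * (1 - h)
              W2 -= lr * h.T @ d2 / len(X)  # backward pass updates
              W1 -= lr * X.T @ d1 / len(X)
          return W1, W2

      # Toy "applications": approve when income minus debt is positive.
      X = rng.normal(size=(200, 2))
      y = (X[:, 0] - X[:, 1] > 0).astype(float)
      W1, W2 = train(X, y)
      pred = sigmoid(sigmoid(X @ W1) @ W2)[:, 0] > 0.5
      print("training accuracy:", (pred == (y > 0.5)).mean())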

  10. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.

  11. Impact of the snow cover scheme on snow distribution and energy budget modeling over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing

    2018-02-01

    This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on snow distribution and the surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%); however, SL12 underestimates SCF at a rate of 15.1%, while NY07 overestimates it at a rate of 15.2%. Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value through all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulations of snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimates from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has great implications for further improvement of subgrid-scale snow variations over the Tibetan Plateau.
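
    For orientation, one commonly cited diagnostic SCF formulation attributed to Niu and Yang (2007) ties fractional cover to snow depth and density, with cover shrinking as the pack densifies. The sketch below uses placeholder parameter values, not the CLM4.5 defaults, and should be read as the general shape of such a scheme rather than its exact implementation:

      import numpy as np

      def scf_ny07_like(snow_depth, rho_snow, z0g=0.01, m=1.0, rho_new=100.0):
          """NY07-style diagnostic snow cover fraction (illustrative).

          snow_depth : snow depth in meters
          rho_snow   : snow density in kg/m^3
          z0g        : ground roughness length (placeholder value)
          m          : melting-factor exponent (placeholder value)
          """
          return np.tanh(snow_depth / (2.5 * z0g * (rho_snow / rho_new) ** m))

      print(scf_ny07_like(0.05, 150.0))   # shallow, aged snow: partial cover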

  12. A new routing enhancement scheme based on node blocking state advertisement in wavelength-routed WDM networks

    NASA Astrophysics Data System (ADS)

    Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng

    2005-02-01

    Increasing switching capacity brings considerable complexity to the optical node. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means the node is blocking to some extent; examples include the multi-granularity switching node, which in fact uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WCs). It is conceivable that these blocking nodes have great effects on the problem of routing and wavelength assignment. Previous works studied the blocking case of the partial-WC OXC using complicated wavelength assignment algorithms, but the complexity of those schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve routing efficiency in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, in which all nodes employ a partially shared WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
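
    The route-selection idea reduces to a shortest-path search once each advertised node blocking probability p is mapped to an additive weight -log(1 - p): minimizing the path weight then maximizes the end-to-end pass probability. The sketch below is illustrative; the advertisement format and protocol details are not from the paper:

      import heapq, math

      def least_blocking_path(adj, blocking, src, dst):
          """Dijkstra on weights -log(1 - p_block) per traversed node."""
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, math.inf):
                  continue
              for v in adj[u]:
                  w = -math.log(1.0 - blocking[v])  # advertised state
                  if d + w < dist.get(v, math.inf):
                      dist[v], prev[v] = d + w, u
                      heapq.heappush(heap, (d + w, v))
          path = [dst]
          while path[-1] != src:
              path.append(prev[path[-1]])
          return path[::-1]

      adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
      blocking = {"A": 0.0, "B": 0.5, "C": 0.1, "D": 0.0}
      print(least_blocking_path(adj, blocking, "A", "D"))  # routes via C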

  13. Sensing of molecules using quantum dynamics

    DOE PAGES

    Migliore, Agostino; Naaman, Ron; Beratan, David N.

    2015-04-24

    In this study, we design sensors where information is transferred between the sensing event and the actuator via quantum relaxation processes, through distances of a few nanometers. We thus explore the possibility of sensing using intrinsically quantum mechanical phenomena that are also at play in photobiology, bioenergetics, and information processing. Specifically, we analyze schemes for sensing based on charge transfer and polarization (electronic relaxation) processes. These devices can have surprising properties. Their sensitivity can increase with increasing separation between the sites of sensing (the receptor) and the actuator (often a solid-state substrate). This counterintuitive response and other quantum features give these devices favorable characteristics, such as enhanced sensitivity and selectivity. Finally, using coherent phenomena at the core of molecular sensing presents technical challenges but also suggests appealing schemes for molecular sensing and information transfer in supramolecular structures.

  14. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed, since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
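
    The redundancy exploited by the second scheme can be illustrated for an idealized 0/45/90/135-degree microgrid: ideally I0 + I90 = I45 + I135 = S0, so a dead 0-degree pixel can be synthesized from the other three channels of its superpixel. This is a toy version of the idea, not the paper's exact estimator:

      import numpy as np

      def replace_dead_0deg(I0, I45, I90, I135, dead_mask):
          """Synthesize dead 0-degree pixels from the polarization
          redundancy I0 = I45 + I135 - I90 (ideal measurements)."""
          estimate = I45 + I135 - I90
          return np.where(dead_mask, estimate, I0)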

  15. Design of a vehicle based system to prevent ozone loss

    NASA Technical Reports Server (NTRS)

    Talbot, Matthew D.; Eby, Steven C.; Ireland, Glen J.; Mcwithey, Michael C.; Schneider, Mark S.; Youngblood, Daniel L.; Johnson, Matt; Taylor, Chris

    1994-01-01

    This project is designed to be completed over a three year period. Overall project goals are: (1) to understand the processes that contribute to stratospheric ozone loss; (2) to determine the best scheme to prevent ozone loss; and (3) to design a vehicle based system to carry out the prevention scheme. The 1993/1994 design objectives included: (1) to review the results of the 1992/1993 design team, including a reevaluation of the key assumptions used; (2) to develop a matrix of baseline vehicle concepts as candidates for the delivery vehicle; and (3) to develop a selection criteria and perform quantitative trade studies to use in the selection of the specific vehicle concept.

  16. Market behavior and performance of different strategy evaluation schemes.

    PubMed

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-08-01

    Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents' strategy preferences and consequently their behavioral patterns. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme's success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
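
    The three evaluation rules can be made concrete with a toy scorer. The sketch below assumes fixed lookup-table strategies and binary up/down price moves; the paper's accumulated-wealth bookkeeping and finite-memory scoring are omitted.

    ```python
    import numpy as np

    def score_strategies(prices, strategies, scheme, memory=2):
        """Score lookup-table strategies against an exogenous price series.

        strategies: (n_strat, 2**memory) array of +1 (buy) / -1 (sell)
        actions, indexed by the last `memory` price moves encoded as a
        binary number.
        scheme: 'wealth'   - score follows the realized gain of the position,
                'majority' - reward moving with the next move (trend-following),
                'minority' - reward moving against it (trend-opposing).
        """
        moves = np.sign(np.diff(prices)).astype(int)  # +1 up, -1 down, 0 flat
        scores = np.zeros(strategies.shape[0])
        for t in range(memory, len(moves)):
            if 0 in moves[t - memory:t] or moves[t] == 0:
                continue  # skip flat-price steps in this toy version
            idx = int("".join('1' if m > 0 else '0'
                              for m in moves[t - memory:t]), 2)
            acts = strategies[:, idx]
            if scheme == 'wealth':
                scores += acts * moves[t]        # position times price change
            elif scheme == 'majority':
                scores += (acts == moves[t])     # moved with the trend
            else:                                # 'minority'
                scores += (acts == -moves[t])    # moved against the trend
        return scores
    ```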

  17. Sensitivity of measurement-based purification processes to inner interactions

    NASA Astrophysics Data System (ADS)

    Militello, Benedetto; Napoli, Anna

    2018-02-01

    The sensitivity of a repeated measurement-based purification scheme to additional undesired couplings is analyzed, focusing on the very simple and archetypical system consisting of two two-level systems interacting with a repeatedly measured one. Several regimes are considered and in the strong coupling limit (i.e., when the coupling constant of the undesired interaction is very large) the occurrence of a quantum Zeno effect is proven to dramatically jeopardize the efficiency of the purification process.

  18. Assessment of the turbulence parameterization schemes for the Martian mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Temel, Orkun; Karatekin, Ozgur; Van Beeck, Jeroen

    2016-07-01

    Turbulent transport within the Martian atmospheric boundary layer (ABL) is one of the most important physical processes in the Martian atmosphere, owing to the very thin Martian atmosphere and the super-adiabatic conditions during the diurnal cycle [1]. Realistic modeling of turbulent fluxes within the Martian ABL is crucial for many physical phenomena, including dust devils [2], methane dispersion [3] and nocturnal jets [4]. Moreover, the surface heat and mass fluxes, which are linked to mass transport within the Martian sub-surface, are computed by the turbulence parameterization schemes. Therefore, in addition to its applications within the Martian boundary layer, the parameterization of turbulence also bears on biological research on Mars, including investigations of the water cycle and sub-surface modeling. In terms of modeling approaches, "planetary boundary layer (PBL) schemes" have been applied to the Martian ABL not only for global circulation modeling but also for mesoscale simulations [5]. The PBL schemes used for Mars are variants of schemes originally developed for the Earth, based either on empirical determination of turbulent fluxes [6] or on solving a one-dimensional turbulent kinetic energy equation [7]. Although Large Eddy Simulation techniques have also been applied with regional models for Mars, these advanced models still rely on features of the traditional PBL schemes for sub-grid modeling [8]. Assessment of these PBL schemes is therefore vital for a better understanding of the atmospheric processes of Mars. In this framework, the present study is devoted to the validation of different turbulence modeling approaches for the Martian ABL against Viking Lander [9] and MSL [10] datasets. The GCM/mesoscale code used is PlanetWRF, the extended version of the WRF model for extraterrestrial atmospheres [11]. Based on the measurements, the performances of different PBL schemes have been evaluated and some improvements are proposed. [1] Colaïtis, A., Spiga, A., Hourdin, F., Rio, C., Forget, F., & Millour, E. (2013). A thermal plume model for the Martian convective boundary layer. Journal of Geophysical Research: Planets, 118(7), 1468-1487. [2] Balme, M., & Greeley, R. (2006). Dust devils on Earth and Mars. Reviews of Geophysics, 44(3). [3] Olsen, K. S., Cloutis, E., & Strong, K. (2012). Small-scale methane dispersion modelling for possible plume sources on the surface of Mars. Geophysical Research Letters, 39(19). [4] Savijärvi, H., & Siili, T. (1993). The Martian slope winds and the nocturnal PBL jet. Journal of the Atmospheric Sciences, 50(1), 77-88. [5] Fenton, L. K., Toigo, A. D., & Richardson, M. I. (2005). Aeolian processes in Proctor crater on Mars: Mesoscale modeling of dune-forming winds. Journal of Geophysical Research: Planets, 110(E6). [6] Hong, S.-Y., Noh, Y., & Dudhia, J. (2006). A new vertical diffusion package with an explicit treatment of entrainment processes. Monthly Weather Review, 134, 2318-2341. [7] Janjic, Z. I. (1994). The Step-Mountain Eta Coordinate Model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Monthly Weather Review, 122, 927-945. [8] Michaels, T. I., & Rafkin, S. C. (2004). Large-eddy simulation of atmospheric convection on Mars. Quarterly Journal of the Royal Meteorological Society, 130(599), 1251-1274. [9] Hess, S. L., Henry, R. M., Leovy, C. B., Ryan, J. A., & Tillman, J. E. (1977). Meteorological results from the surface of Mars: Viking 1 and 2. Journal of Geophysical Research, 82(28), 4559-4574. [10] Martínez, G., et al. (2015). Likely frost events at Gale crater: Analysis from MSL/REMS measurements. Icarus. [11] Richardson, M. I., Toigo, A. D., & Newman, C. E. (2007). PlanetWRF: A general purpose, local to global numerical model for planetary atmospheric and climate dynamics. Journal of Geophysical Research: Planets, 112(E9).

  19. Parallel discrete-event simulation schemes with heterogeneous processing elements.

    PubMed

    Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung

    2014-07-01

    To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model corresponds to the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme); the Family model corresponds to the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered: in the first kind the computing capacities of the PEs do not differ much, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind; however, by regularizing the arrangement of PEs of the first kind, the EW scheme can be made to show synchronizability. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
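
    For orientation, in the basic conservative PDES scheme each PE carries a local virtual time, and the virtual-time surface evolves like a growing interface. A minimal sketch of one update sweep follows; modeling heterogeneous PEs through per-PE rates is an illustrative assumption, not the paper's exact setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pdes_sweep(tau, neighbors, rates):
        """One sweep of the basic conservative PDES update rule.

        tau: local virtual times, one per processing element (PE).
        neighbors: list of neighbor-index lists (the PE topology).
        rates: per-PE processing rates; unequal rates model nonidentical PEs.
        A PE advances only if its local time does not exceed any neighbor's
        (the rule whose time surface is governed by the KPZ equation); it
        then advances by an exponential increment with mean 1/rate.
        Returns the new times and the utilization (fraction of working PEs).
        """
        active = np.array([all(tau[i] <= tau[j] for j in nbrs)
                           for i, nbrs in enumerate(neighbors)])
        tau = tau.copy()
        idx = np.nonzero(active)[0]
        tau[idx] += rng.exponential(1.0 / np.asarray(rates)[idx])
        return tau, active.mean()
    ```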

  20. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics so as to increase the likelihood of reaching a global optimum solution. A Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Randomized decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, helping to accomplish the pre-defined goals set in the proposed scheme.

  1. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
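
    The construction of test error patterns from reliability information is the Chase-style step of such decoders. Below is a sketch of that step alone, under the assumption that the n_lrp least reliable positions are perturbed exhaustively; the algebraic decoder, optimality test, and trellis search are omitted.

    ```python
    from itertools import combinations

    import numpy as np

    def test_error_patterns(reliability, n_lrp=4):
        """Generate Chase-style test error patterns from bit reliabilities.

        reliability: per-bit magnitudes |r_i| of the received soft values;
        the n_lrp least reliable positions are perturbed in all 2**n_lrp
        combinations.  Each pattern would be XORed onto the hard decision
        before it is passed to the algebraic decoder to produce one
        candidate codeword.
        """
        n = len(reliability)
        lrp = np.argsort(reliability)[:n_lrp]  # least reliable positions
        for k in range(n_lrp + 1):
            for subset in combinations(lrp, k):
                e = np.zeros(n, dtype=int)
                e[list(subset)] = 1
                yield e
    ```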

  2. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no theory on image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
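
    To make the third category concrete, here is a minimal seeded region-growing sketch of the kind region-based schemes build on. The 4-connectivity, the tolerance test against the running mean, and the single seed are illustrative choices, not a specific published algorithm.

    ```python
    from collections import deque

    import numpy as np

    def region_grow(gray, seed, tol=10):
        """Region-based segmentation by seeded region growing.

        Starting from `seed` (row, col), 4-connected pixels are absorbed
        while their gray level stays within `tol` of the running region
        mean, producing one homogeneous region of the kind region-based
        schemes assemble into a full segmentation.
        """
        rows, cols = gray.shape
        mask = np.zeros_like(gray, dtype=bool)
        mask[seed] = True
        total, count = float(gray[seed]), 1
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= rr < rows and 0 <= cc < cols and not mask[rr, cc]:
                    if abs(float(gray[rr, cc]) - total / count) <= tol:
                        mask[rr, cc] = True
                        total += float(gray[rr, cc])
                        count += 1
                        queue.append((rr, cc))
        return mask
    ```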

  3. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding based OFDM by 1 dB at a BER of 10⁻⁹, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  4. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plaintext. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. Compression and encryption are performed jointly in a single process. The security test results indicate the proposed methods have high security and good compression performance.
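
    The five-dimensional hyperchaotic system itself is not reproduced in this record, so the sketch below substitutes a logistic map as the keystream source to illustrate the diffusion stage and the plaintext dependence of the cipher chain. Every parameter is illustrative, and the confusion (permutation) stage is omitted.

    ```python
    import numpy as np

    def chaotic_keystream(x0, r, n, burn=1000):
        """Keystream bytes from a chaotic map (a logistic map stands in
        here for the paper's five-dimensional hyperchaotic system)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for _ in range(burn):          # discard the transient
            x = r * x * (1.0 - x)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def encrypt(image_bytes, key=(0.3579, 3.99)):
        """Plaintext-dependent diffusion: each cipher byte depends on the
        keystream and on the previous cipher byte, tying the cipher chain
        to the plaintext as in the scheme described above."""
        ks = chaotic_keystream(key[0], key[1], len(image_bytes))
        c, prev = bytearray(len(image_bytes)), 0
        for i, p in enumerate(image_bytes):
            prev = (p ^ ks[i] ^ prev) & 0xFF
            c[i] = prev
        return bytes(c)
    ```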

  5. Dynamic electrical impedance imaging with the interacting multiple model scheme.

    PubMed

    Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C

    2005-04-01

    In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.

  6. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  7. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  8. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters

    PubMed Central

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-01-01

    Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475

  9. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters.

    PubMed

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-02-23

    Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved.

  10. Memristive device based learning for navigation in robots.

    PubMed

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real-time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities in a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  11. All-IP wireless sensor networks for real-time patient monitoring.

    PubMed

    Wang, Xiaonan; Le, Deguang; Cheng, Hongbin; Xie, Conghua

    2014-12-01

    This paper proposes all-IP WSNs (wireless sensor networks) for real-time patient monitoring. An all-IP WSN architecture based on gateway trees is proposed and the hierarchical address structure is presented. Based on this architecture, the all-IP WSN can perform routing without route discovery. Moreover, a mobile node is always identified by a home address and does not need to be configured with a care-of address during the mobility process, so the communication disruption caused by address changes is avoided. Through the proposed scheme, a physician can monitor the vital signs of a patient at any time and place, and from the IPv6 address can also obtain the location of the patient in order to provide effective and timely treatment. Finally, the proposed scheme is evaluated by simulation, and the simulation data indicate that it effectively reduces the communication delay and control cost and lowers the packet loss rate. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves only a low frame rate, far from real-time requirements, on the general purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.

  13. Quantum anonymous voting with unweighted continuous-variable graph states

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Feng, Yanyan; Zeng, Guihua

    2016-08-01

    Motivated by the revealing topological structures of the continuous-variable graph state (CVGS), we investigate the design of a quantum voting scheme, which has significant advantages over conventional ones in terms of efficiency and graphicness. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by extended ballot schemes for binary-valued and multi-valued ballots under some constraints on the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. The proposed schemes can be implemented using standard off-the-shelf components, in contrast to discrete-variable quantum voting schemes, owing to the characteristics of CV-based quantum cryptography.

  14. Semi-autonomous parking for enhanced safety and efficiency.

    DOT National Transportation Integrated Search

    2017-06-01

    This project focuses on the use of tools from a combination of computer vision and localization based navigation schemes to aid the process of efficient and safe parking of vehicles in high density parking spaces. The principles of collision avoidanc...

  15. Elucidation of molecular kinetic schemes from macroscopic traces using system identification

    PubMed Central

    González-Maeso, Javier; Sealfon, Stuart C.; Galocha-Iragüen, Belén; Brezina, Vladimir

    2017-01-01

    Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems. PMID:28192423

  16. Secure and Efficient Key Coordination Algorithm for Line Topology Network Maintenance for Use in Maritime Wireless Sensor Networks.

    PubMed

    Elgenaidi, Walid; Newe, Thomas; O'Connell, Eoin; Toal, Daniel; Dooly, Gerard

    2016-12-21

    There has been a significant increase in the proliferation and implementation of Wireless Sensor Networks (WSNs) in different disciplines, including the monitoring of maritime environments, healthcare systems, and industrial sectors. It has now become critical to address the security issues of data communication while considering sensor node constraints. There are many proposed schemes, including the scheme being proposed in this paper, to ensure that there is a high level of security in WSNs. This paper presents a symmetric security scheme for a maritime coastal environment monitoring WSN. The scheme provides security for travelling packets via individually encrypted links between authenticated neighbors, thus avoiding a reiteration of a global rekeying process. Furthermore, this scheme proposes a dynamic update key based on a trusted node configuration, called a leader node, which works as a trusted third party. The technique has been implemented in real time on a Waspmote test bed sensor platform and the results from both field testing and indoor bench testing environments are discussed in this paper.

  17. Fault-tolerant quantum computation with nondeterministic entangling gates

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2018-03-01

    Performing entangling gates between physical qubits is necessary for building a large-scale universal quantum computer, but in some physical implementations—for example, those that are based on linear optics or networks of ion traps—entangling gates can only be implemented probabilistically. In this work, we study the fault-tolerant performance of a topological cluster state scheme with local nondeterministic entanglement generation, where failed entangling gates (which correspond to bonds on the lattice representation of the cluster state) lead to a defective three-dimensional lattice with missing bonds. We present two approaches for dealing with missing bonds; the first is a nonadaptive scheme that requires no additional quantum processing, and the second is an adaptive scheme in which qubits can be measured in an alternative basis to effectively remove them from the lattice, hence eliminating their damaging effect and leading to better threshold performance. We find that a fault-tolerance threshold can still be observed with a bond-loss rate of 6.5% for the nonadaptive scheme, and a bond-loss rate as high as 14.5% for the adaptive scheme.

  18. Universal photonic quantum gates assisted by ancilla diamond nitrogen-vacancy centers coupled to resonators

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Long, Gui Lu

    2015-03-01

    We propose two compact, economic, and scalable schemes for implementing optical controlled-phase-flip and controlled-controlled-phase-flip gates by using the input-output process of a single-sided cavity strongly coupled to a single nitrogen-vacancy-center defect in diamond. Additional photonic qubits, necessary for procedures based on the parity-check measurement or controlled-path and merging gates, are not employed in our schemes. In the controlled-path gate, the paths of the target photon are conditionally controlled by the control photon, and these two paths can be merged back into one by using a merging gate. Only one half-wave plate is employed in our scheme for the controlled-phase-flip gate. Compared with the conventional synthesis procedures for constructing a controlled-controlled-phase-flip gate, the cost of which is two controlled-path gates and two merging gates, or six controlled-not gates, our scheme is more compact and simpler. Our schemes could be performed with a high fidelity and high efficiency with current achievable experimental techniques.

  19. Secure and Efficient Key Coordination Algorithm for Line Topology Network Maintenance for Use in Maritime Wireless Sensor Networks

    PubMed Central

    Elgenaidi, Walid; Newe, Thomas; O’Connell, Eoin; Toal, Daniel; Dooly, Gerard

    2016-01-01

    There has been a significant increase in the proliferation and implementation of Wireless Sensor Networks (WSNs) in different disciplines, including the monitoring of maritime environments, healthcare systems, and industrial sectors. It has now become critical to address the security issues of data communication while considering sensor node constraints. There are many proposed schemes, including the scheme being proposed in this paper, to ensure that there is a high level of security in WSNs. This paper presents a symmetric security scheme for a maritime coastal environment monitoring WSN. The scheme provides security for travelling packets via individually encrypted links between authenticated neighbors, thus avoiding a reiteration of a global rekeying process. Furthermore, this scheme proposes a dynamic update key based on a trusted node configuration, called a leader node, which works as a trusted third party. The technique has been implemented in real time on a Waspmote test bed sensor platform and the results from both field testing and indoor bench testing environments are discussed in this paper. PMID:28009834

  20. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform over public networks. Zhu recently presented an improved authentication scheme to remedy the weakness of the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has faults: the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weakness of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.

  1. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on a cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, Mark Walter

    Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design and could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques other than TMR that can help mitigate the embedded hard-core PPC440 processor within the Virtex-5 FPGA. Implementing various mitigation schemes reliably within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within the Virtex-5 FPGAs, and describes in detail the design of the cache mitigation scheme and the testing conducted at the radiation effects facility on the Texas A&M campus.

  3. Proposal and performance analysis on the PDM microwave photonic link for the mm-wave signal with hybrid QAM-MPPM-RZ modulation

    NASA Astrophysics Data System (ADS)

    Tian, Bo; Zhang, Qi; Ma, Jianxin; Tao, Ying; Shen, Yufei; Wang, Yang; Zhang, Geng; Zhou, Wenmao; Zhao, Yi; Pan, Xiaolong

    2018-07-01

    A polarization division multiplexed (PDM) microwave photonic link for the millimeter (mm)-wave signal with a hybrid modulation scheme is proposed in this paper, based on the combination of quadrature amplitude modulation, multi-pulse pulse-position modulation and return-to-zero modulation (QAM-MPPM-RZ). In this scheme, the two orthogonal polarization states enable simultaneous transmission of four data flows, which can provide different services for users according to their data rate requirements. To generate the hybrid QAM-MPPM-RZ mm-wave signal, the QAM mm-wave signal is directly modulated by the MPPM-RZ signal without using digital signal processing (DSP) devices, which reduces the overhead of the encoding process. The generated QAM-MPPM-RZ mm-wave signal is then transmitted over the PDM microwave photonic link based on SSB modulation. The sparsity of QAM-MPPM-RZ not only improves power efficiency but also reduces the degradation caused by fiber chromatic dispersion. The simulation results show that, under the constraint of the same transmitted data rate, the PDM microwave photonic link with a 50 GHz QAM-MPPM-RZ mm-wave signal achieves a much lower bit-error rate than ordinary 32-QAM. In addition, increasing the laser linewidth has no additional impact on the proposed scheme.

  4. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of parallel performance, the implicit domain decomposition approach is found to scale well on supercomputers with a large number of processors.

  5. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  6. A secure biometrics-based authentication scheme for telecare medicine information systems.

    PubMed

    Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng

    2013-10-01

    The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to Denial-of-Service attacks. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.

  7. What a Joint. Youth Training Scheme. Core Exemplar Work Based Project.

    ERIC Educational Resources Information Center

    Further Education Staff Coll., Blagdon (England).

    This trainer's guide is intended to assist supervisors of work-based career training projects in helping students become familiar with meat processing--from livestock at the stockyards, through meat packers (wholesalers) and butcher shops, to the cooked state--and with the different joints (cuts) of beef, lamb, and pork. The guide…

  8. Understanding latent structures of clinical information logistics: A bottom-up approach for model building and validating the workflow composite score.

    PubMed

    Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes

    2017-01-01

    Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision to drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme, which was based on factor loadings obtained in the analyses, was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. First, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduced the false alarm rates of the RX-based detector.
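
    A minimal sketch of this two-stage pipeline follows, assuming the global RX statistic (Mahalanobis distance from the scene mean) and binary dilation as the morphology step; the threshold and structuring choices are illustrative, not the paper's settings.

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def rx_scores(cube):
        """Global RX anomaly detector: Mahalanobis distance of each pixel
        spectrum from the scene mean (cube shape: rows x cols x bands)."""
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        d = X - mu
        return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

    def detect_with_postprocessing(cube, thresh, grow_iter=1):
        """Threshold the RX scores, then morphologically dilate the anomaly
        map so pixels adjacent to high-scoring anomalies are also flagged,
        in the spirit of the post-processing step described above."""
        anomalies = rx_scores(cube) > thresh
        return binary_dilation(anomalies, iterations=grow_iter)
    ```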

  10. Graphics processing unit based computation for NDE applications

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphics Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of the 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation over a serial CPU implementation is then discussed.
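
    The heat diffusion problem maps naturally onto GPU threads because each grid point's update is independent. The sketch below shows the explicit 2-D stencil in NumPy form; a CUDA kernel would assign one thread per interior point to compute exactly one entry of this update. Parameters are illustrative.

    ```python
    import numpy as np

    def heat_step(u, alpha, dt, dx):
        """One explicit finite-difference step of the 2-D heat equation
        u_t = alpha * (u_xx + u_yy) on a uniform grid.  Every interior
        point is updated independently from its four neighbors, which is
        the data parallelism a GPU kernel exploits.  Stability of the
        explicit scheme requires dt <= dx**2 / (4 * alpha)."""
        lap = np.zeros_like(u)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                           u[1:-1, 2:] + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1])
        return u + alpha * dt / dx**2 * lap
    ```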

  11. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers.

    PubMed

    Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L

    2010-08-01

    To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  12. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities whose spatial dimensions are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared to methods that rely on exact linear solves, indicating the considerable potential of the 'inexact solve' concept. Finally, the scheme that generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
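
    For orientation, shift-invert Lanczos finds the eigenvalues of K x = λ M x nearest a chosen shift σ by iterating with (K − σM)⁻¹M. The snippet below demonstrates the transformation with SciPy's eigsh on an illustrative tridiagonal pencil; note that SciPy factorizes the shifted operator exactly, standing in for the inexact PCG solves the paper advocates.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Generalized symmetric eigenproblem K x = lambda M x: with sigma set,
    # eigsh applies the shift-invert transformation internally and
    # which='LM' returns the eigenvalues nearest the shift.
    n = 2000
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
    M = diags([1.0], [0], shape=(n, n)).tocsc()   # identity mass matrix

    sigma = 0.5  # look for several eigenvalues nearest this shift
    vals, vecs = eigsh(K, k=6, M=M, sigma=sigma, which='LM')
    print(np.sort(vals))
    ```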

  13. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture, and the decrypted result has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables accurate acquisition of its content from the noisy retrieval, so the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal eliminates the silhouette problem of previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  14. Active inference and learning.

    PubMed

    Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O Doherty, John; Pezzulo, Giovanni

    2016-09-01

    This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. A study of design approach of spreading schemes for viral marketing based on human dynamics

    NASA Astrophysics Data System (ADS)

    Yang, Jianmei; Zhuang, Dong; Xie, Weicong; Chen, Guangrong

    2013-12-01

    Before launching a real viral marketing campaign, a spreading scheme needs to be designed by simulation. Based on a categorization of spreading patterns in the real world and in models, we point out that existing research (especially Yang et al. (2010), Ref. [16]) implicitly assumes that if a user decides to post a received message (is activated), he or she will repost it promptly (Prompt Action After Activation, or PAAA). A careful analysis of a real dataset, however, shows that the observed time differences between action and activation exhibit a heavy-tailed distribution. A simulation model for the heavy-tailed pattern is then proposed and run. Similarities and differences of the spreading processes under the heavy-tailed and PAAA patterns are analyzed. Consequently, a more practical design approach of spreading schemes for viral marketing on the QQ platform is proposed. The design approach can be extended and applied to contexts with non-heavy-tailed patterns, and to viral marketing on other instant messaging platforms.
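
    The distinction between the two patterns is simply a distribution over activation-to-action delays. Below is a sketch of how such delays might be drawn in a spreading simulation, with a Pareto power law standing in for the empirically observed heavy tail; the exponent and cutoff are illustrative, not fitted values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def reposting_delays(n, pattern='heavy', alpha=2.0, t0=1.0):
        """Draw activation-to-action time differences for n activated users.

        'prompt' reproduces the PAAA assumption (zero delay), while 'heavy'
        draws from a classical Pareto power law with minimum delay t0 and
        tail exponent alpha, mimicking the heavy-tailed pattern observed
        in the real dataset.
        """
        if pattern == 'prompt':
            return np.zeros(n)
        return t0 * (rng.pareto(alpha, size=n) + 1.0)
    ```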

  16. The Politico-Economic Challenges of Ghana’s National Health Insurance Scheme Implementation

    PubMed Central

    Fusheini, Adam

    2016-01-01

    Background: National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme with positive ratings, while challenges have also been noted. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Methods: Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Results: Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. Conclusion: The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and that achieving UHC under the NHIS requires political stewardship. Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing with minimal interference in operations. For the sustainability of the scheme, authorities need to review the exemption policy, the rates of contributions (especially from informal-sector employees) and the recruitment criteria for scheme workers, explore additional sources of funding, and re-examine the training needs of employees to strengthen their competences, among others. PMID:27694681

  17. Contrastive Analysis of Meteorological Elements Simulated by the Noah and CLM4 Land Surface Parameterization Schemes over the Yellow River Source Region

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Wen, X.

    2017-12-01

    The Yellow River source region is situated in the northeastern Tibetan Plateau, which is considered a global climate-change hot-spot and, in view of its fragile ecosystem, one of the areas most sensitive to global warming. The region plays an irreplaceable role in the downstream water supply of the Yellow River because of its unique topography and variable climate. The water-energy cycle processes of the Yellow River source region from July to September 2015 were simulated using the WRF mesoscale numerical model, with two experiments using the Noah and CLM4 land surface parameterization schemes, respectively. Based on the GLDAS dataset, ground automatic weather stations and the Zoige plateau wetland ecosystem research station, the simulated near-surface meteorological elements and surface energy parameters of the two schemes were compared with observations. The results show that the model simulated the daily variations of meteorological factors at Zoige station in September quite well. For the CLM scheme, the correlation coefficients between simulated and observed temperature and humidity were 0.88 and 0.83, the RMSEs were 1.94 °C and 9.97%, and the biases were 0.04 °C and 3.30%, closer to the observations than the Noah scheme. The correlation coefficients of net radiation, surface heat flux, upward shortwave and upward longwave radiation were 0.86, 0.81, 0.84 and 0.88, respectively, corresponding well with the observations. The sensible and latent heat flux distributions of the Noah scheme corresponded quite well to GLDAS. The distribution and magnitude of 2-m relative humidity and soil moisture were closer to the surface observations for the CLM scheme, because it describes the photosynthesis and evapotranspiration of land-surface vegetation more rationally. The simulation of precipitation and downward longwave radiation still needs to be improved. This study provides a theoretical basis for the numerical simulation of the water-energy cycle in the source region of the Yellow River basin.
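
    The verification statistics quoted above (correlation coefficient, RMSE and bias between simulated and observed series) can be computed in a few lines; a short sketch with synthetic stand-in data:

        import numpy as np

        def verify(sim, obs):
            # Correlation coefficient, root-mean-square error and mean bias.
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r = np.corrcoef(sim, obs)[0, 1]
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            bias = np.mean(sim - obs)
            return r, rmse, bias

        obs = 10.0 + 5.0 * np.sin(np.linspace(0, 6, 240))   # stand-in observations
        sim = obs + np.random.normal(0.1, 1.5, obs.size)    # stand-in model output
        print(verify(sim, obs))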

  18. The Politico-Economic Challenges of Ghana's National Health Insurance Scheme Implementation.

    PubMed

    Fusheini, Adam

    2016-04-27

    National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme with positive ratings, while challenges have also been noted. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Participants identified the political issues of over-politicisation and political interference as the main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and that achieving UHC under the NHIS requires political stewardship. Political leadership has the responsibility to build trust and confidence in the system by providing the necessary resources and backing with minimal interference in operations. For the sustainability of the scheme, authorities need to review the exemption policy, the rates of contributions (especially from informal-sector employees) and the recruitment criteria for scheme workers, explore additional sources of funding, and re-examine the training needs of employees to strengthen their competences, among others. © 2016 by Kerman University of Medical Sciences

  19. Design of an extensive information representation scheme for clinical narratives.

    PubMed

    Deléger, Louise; Campillos, Leonardo; Ligozat, Anne-Laure; Névéol, Aurélie

    2017-09-11

    Knowledge representation frameworks are essential to the understanding of complex biomedical processes, and to the analysis of biomedical texts that describe them. Combined with natural language processing (NLP), they have the potential to contribute to retrospective studies by unlocking important phenotyping information contained in the narrative content of electronic health records (EHRs). This work aims to develop an extensive information representation scheme for clinical information contained in EHR narratives, and to support secondary use of EHR narrative data to answer clinical questions. We review recent work that proposed information representation schemes and applied them to the analysis of clinical narratives. We then propose a unifying scheme that supports the extraction of information to address a large variety of clinical questions. We devised a new information representation scheme for clinical narratives that comprises 13 entities, 11 attributes and 37 relations. The associated annotation guidelines can be used to consistently apply the scheme to clinical narratives and are available at https://cabernet.limsi.fr/annotation_guide_for_the_merlot_french_clinical_corpus-Sept2016.pdf. The information scheme includes many elements of the major schemes described in the clinical natural language processing literature, as well as a uniquely detailed set of relations.
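
    A sketch of how such an entity-attribute-relation scheme can be represented programmatically; the specific type names used here ("Medication", "Disorder", "treats") are hypothetical placeholders, not the scheme's actual inventory:

        from dataclasses import dataclass, field

        @dataclass
        class Entity:
            id: str
            etype: str                 # one of the scheme's 13 entity types
            span: tuple                # character offsets in the narrative
            attributes: dict = field(default_factory=dict)  # from the 11 attributes

        @dataclass
        class Relation:
            rtype: str                 # one of the scheme's 37 relation types
            source: Entity
            target: Entity

        drug = Entity("T1", "Medication", (10, 21))
        cond = Entity("T2", "Disorder", (35, 47), {"negated": False})
        link = Relation("treats", drug, cond)
        print(link)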

  20. Applying super-droplets as a compact representation of warm-rain microphysics for aerosol-cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Arabas, S.; Jaruga, A.; Pawlowska, H.; Grabowski, W. W.

    2012-12-01

    Clouds may influence the aerosol characteristics of their environment. The relevant processes include wet deposition (rainout or washout) and cloud condensation nuclei (CCN) recycling through the evaporation of cloud droplets and drizzle drops. Recycled CCN physicochemical properties may be altered if the evaporated droplets go through collisional growth or irreversible chemical reactions (e.g. SO2 oxidation). The key challenge of representing these processes in a numerical cloud model stems from the need to track the properties of activated CCN throughout the cloud lifecycle. Lack of such "memory" characterises the so-called bulk and multi-moment as well as bin representations of cloud microphysics. In this study we apply the particle-based scheme of Shima et al. 2009. Each modelled particle (aka super-droplet) is a numerical proxy for a multiplicity of real-world CCN, cloud, drizzle or rain particles of the same size, nucleus type, and position. Tracking cloud nucleus properties is an inherent feature of particle-based frameworks, making them suitable for studying aerosol-cloud-aerosol interactions. The super-droplet scheme is furthermore characterized by linear scalability in the number of computational particles, and by the absence of numerical diffusion in the condensational and Monte-Carlo-type collisional growth schemes. The presentation will focus on the processing of aerosol by a drizzling stratocumulus deck. The simulations are carried out using a 2D kinematic framework and a VOCALS-experiment-inspired set-up (see http://www.rap.ucar.edu/~gthompsn/workshop2012/case1/).
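
    A toy sketch of the super-droplet idea: each computational particle carries a multiplicity, a wet radius and the size of its nucleus, and grows by a simplified diffusional law. The growth constant, size distribution and all parameters below are illustrative assumptions, not values from the study:

        import numpy as np

        n_sd = 1024
        mult = np.full(n_sd, 1.0e8)       # real particles per super-droplet
        dry_r = np.random.lognormal(np.log(0.05e-6), 0.4, n_sd)  # CCN radius [m]
        r = dry_r.copy()                  # wet radius starts at the nucleus size

        def condense(r, dry_r, supersat, dt, G=1.0e-10):
            # Simplified diffusional growth r dr/dt = G*S; curvature and solute
            # terms omitted. Evaporation cannot shrink below the dry nucleus.
            r2 = np.maximum(r**2 + 2.0 * G * supersat * dt, dry_r**2)
            return np.sqrt(r2)

        for _ in range(60):               # one minute of growth at S = 0.5%
            r = condense(r, dry_r, supersat=0.005, dt=1.0)
        print(r.mean(), mult.sum())       # radii grow; multiplicity is conserved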

  1. The Joint UK Land Environment Simulator (JULES), model description - Part 2: Carbon fluxes and vegetation dynamics

    NASA Astrophysics Data System (ADS)

    Clark, D. B.; Mercado, L. M.; Sitch, S.; Jones, C. D.; Gedney, N.; Best, M. J.; Pryor, M.; Rooney, G. G.; Essery, R. L. H.; Blyth, E.; Boucher, O.; Harding, R. J.; Huntingford, C.; Cox, P. M.

    2011-09-01

    The Joint UK Land Environment Simulator (JULES) is a process-based model that simulates the fluxes of carbon, water, energy and momentum between the land surface and the atmosphere. Many studies have demonstrated the important role of the land surface in the functioning of the Earth System. Different versions of JULES have been employed to quantify the effects on the land carbon sink of climate change, increasing atmospheric carbon dioxide concentrations, changing atmospheric aerosols and tropospheric ozone, and the response of methane emissions from wetlands to climate change. This paper describes the consolidation of these advances in the modelling of carbon fluxes and stores, in both the vegetation and soil, in version 2.2 of JULES. Features include a multi-layer canopy scheme for light interception, including a sunfleck penetration scheme, a coupled scheme of leaf photosynthesis and stomatal conductance, representation of the effects of ozone on leaf physiology, and a description of methane emissions from wetlands. JULES represents the carbon allocation, growth and population dynamics of five plant functional types. The turnover of carbon from living plant tissues is fed into a 4-pool soil carbon model. The process-based descriptions of key ecological processes and trace gas fluxes in JULES mean that this community model is well-suited for use in carbon cycle, climate change and impacts studies, either in standalone mode or as the land component of a coupled Earth system model.

  2. Building an authorization model for external means of protection of APCS based on the Internet of things

    NASA Astrophysics Data System (ADS)

    Zaharov, A. A.; Nissenbaum, O. V.; Ponomaryov, K. Y.; Nesgovorov, E. S.

    2018-01-01

    In this paper we study the application of the Internet of Things (IoT) concept and devices to securing automated process control systems. We review different approaches to IoT architecture and design and propose them for several applications in the security of automated process control systems. We consider attribute-based encryption in the context of implementing an access control mechanism, and we propose a secret key distribution scheme between attribute authorities and end devices.

  3. Self-Organizing Map Neural Network-Based Nearest Neighbor Position Estimation Scheme for Continuous Crystal PET Detectors

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei

    2014-10-01

    Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ-interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on field-programmable gate arrays (FPGAs). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aiming not only at providing high performance, but also at being realistic for FPGA implementation. Benefitting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest-neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized and compared with the smoothed k-NN method. The spatial resolutions in full-width-at-half-maximum (FWHM) of the two methods, averaged over the center axis of the detector, were 1.87 ±0.17 mm and 1.92 ±0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but its amount of computation is only about one-tenth that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on an FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.
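
    A highly simplified sketch of the prototype idea: each calibration position's reference events are reduced to a few prototypes with a SOM-like competitive update (no neighborhood function is used here, so this degenerates to online k-means), and unknown events are then positioned by the nearest prototype. Event dimensionality and all parameters are made up for illustration:

        import numpy as np

        def train_prototypes(events, n_proto=64, epochs=10, lr0=0.5):
            idx = np.random.choice(len(events), n_proto, replace=False)
            protos = events[idx].astype(float)
            for e in range(epochs):
                lr = lr0 * (1.0 - e / epochs)      # decaying learning rate
                for x in events:
                    w = np.argmin(((protos - x) ** 2).sum(axis=1))  # winner unit
                    protos[w] += lr * (x - protos[w])
            return protos

        # Calibration: a small prototype set per known position.
        rng = np.random.default_rng(0)
        calib = {(0, 0): train_prototypes(rng.random((500, 16))),
                 (1, 0): train_prototypes(rng.random((500, 16)) + 0.2)}

        def estimate_position(event):
            # The nearest-neighbor search now runs over ~64 prototypes per
            # position instead of all 500 reference events.
            return min(calib, key=lambda p: ((calib[p] - event) ** 2).sum(axis=1).min())

        print(estimate_position(rng.random(16)))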

  4. Understanding security failures of two authentication and key agreement schemes for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra

    2015-03-01

    Smart-card-based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance the security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes provide better efficiency than conventional public-key-cryptography-based schemes. These schemes are important as they present an efficient solution for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to denial-of-service attacks. To understand the security failures of these cryptographic schemes, which is the key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.

  5. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that supports patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes an essential need when a remote patient logs into the telecare server. Recently, many extended-chaotic-maps-based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, based on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attacks and lack of local verification. To overcome these flaws, we propose a chaotic-maps-and-smart-cards-based password authentication scheme applying a biometrics technique and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.

  6. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic-maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic-map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, thereby avoiding the weaknesses of previous schemes.
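
    The extended chaotic-map Diffie-Hellman problem mentioned above rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x). A toy numerical illustration follows; real schemes compute these polynomials over finite fields, since the floating-point cosine form below is insecure:

        import math
        import random

        def chebyshev(n, x):
            # T_n(x) = cos(n * arccos(x)) on [-1, 1]
            return math.cos(n * math.acos(x))

        x = 0.53                            # public seed
        a = random.randint(1000, 9999)      # Alice's secret exponent
        b = random.randint(1000, 9999)      # Bob's secret exponent

        ya, yb = chebyshev(a, x), chebyshev(b, x)   # exchanged public values
        k_alice = chebyshev(a, yb)          # T_a(T_b(x)) = T_ab(x)
        k_bob = chebyshev(b, ya)            # T_b(T_a(x)) = T_ab(x)
        print(abs(k_alice - k_bob) < 1e-6)  # both sides hold the same key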

  7. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic-maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic-map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, thereby avoiding the weaknesses of previous schemes. PMID:28759615

  8. Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System

    NASA Astrophysics Data System (ADS)

    Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier

    2013-04-01

    The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.

  9. CMOS time-to-digital converter based on a pulse-mixing scheme

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Chi; Hwang, Chorng-Sii; Liu, Keng-Chih; Chen, Guan-Hong

    2014-11-01

    This paper proposes a new pulse-mixing scheme utilizing both pulse-shrinking and pulse-stretching mechanisms to improve the performance of time-to-digital converters (TDCs). The temporal resolution of the conventional pulse-shrinking mechanism is determined by the size ratio between homogeneous and inhomogeneous elements. The proposed scheme, which features double-stage operation, derives its resolution from the time difference between the pulse-shrinking and pulse-stretching amounts. Thus, it can achieve greater immunity against temperature and ambient variations than the single-stage scheme. The circuit area can also be reduced by the proposed pulse-mixing scheme. In addition, this study proposes an improved cyclic delay line that successfully eliminates the undesirable shift in the temporal resolution. The effective resolution can therefore be controlled completely by the pulse-mixing unit to improve accuracy. The proposed TDC, composed of only one cyclic delay line and one counter, is fabricated in a TSMC 0.35-μm DPQM CMOS process. The chip core occupies an extremely small area of 0.02 mm2, the best among related works. The experimental results show that an effective resolution of around 53 ps within ±13% variation over a 0-100 °C temperature range is achieved. The power consumption is 90 μW at a sample rate of 1000 samples/s. In addition to the reduced area, the proposed TDC circuit achieves its resolution with lower thermal sensitivity and smaller fluctuations caused by process variations.

  10. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the security requirements of networks, biometric authentication schemes applied in the multi-server environment have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is thus more appropriate for practical applications in remote distributed networks. PMID:26866606

  11. Investigation of transient ignition process in a cavity based scramjet combustor using combined ethylene injectors

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Cai, Zun; Tong, Yiheng; Zheng, Hongtao

    2017-08-01

    Large Eddy Simulation (LES) and experiments were employed to investigate the transient ignition and flame propagation process in a rearwall-expansion cavity scramjet combustor using combined fuel injection schemes. The compressible supersonic solver and three ethylene combustion mechanisms were first validated against experimental data, and the results show reasonably good agreement. A fuel injection scheme combining transverse and direct injectors in the cavity provides a beneficial mixture distribution and can achieve successful ignition. Four stages are illustrated in detail from both the experiment and the LES. After forced ignition in the cavity, the initial flame kernel propagates upstream towards the cavity front edge and ignites the mixture there, which acts as a continuous pilot flame; the flame then propagates rapidly downstream along the cavity shear layer to the combustor exit. A cavity-shear-layer flame stabilization mode can be inferred from the heat release rate and the local high-temperature distribution during the combustion process.

  12. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    Estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are utilized to estimate the aerosol optical properties and hence the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weighting factors directly at all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance, and the inter-band relationship of multiple-scattering aerosol reflectances is likewise non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships of the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR, with no residual errors for the selected aerosol models, and then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm with regard to errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. From the simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters; from the in situ match-ups, they were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  13. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions for providing base data for large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive advanced technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analysis. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness and quality, as derived from the confusion matrix.
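
    The three quality measures named above are conventionally derived from confusion-matrix counts of extracted versus reference map objects; a sketch using the standard definitions (assumed here, not quoted from the paper):

        def map_quality(tp, fp, fn):
            completeness = tp / (tp + fn)       # share of reference objects found
            correctness = tp / (tp + fp)        # share of extracted objects correct
            quality = tp / (tp + fp + fn)       # combined measure
            return completeness, correctness, quality

        # Hypothetical counts for one extracted map layer.
        print(map_quality(tp=900, fp=50, fn=100))   # -> (0.9, ~0.947, ~0.857)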

  14. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE PAGES

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-10-01

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.

  15. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.

  16. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    NASA Astrophysics Data System (ADS)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-11-01

    Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.

  17. OpenACC acceleration of an unstructured CFD solver based on a reconstructed discontinuous Galerkin method for compressible flows

    DOE PAGES

    Xia, Yidong; Lou, Jialin; Luo, Hong; ...

    2015-02-09

    Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
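
    A face coloring of the kind mentioned above can be sketched greedily: two faces sharing a cell receive different colors, so all faces of one color can accumulate their integrals concurrently without write conflicts. This is an illustrative sketch in Python, not the paper's exact algorithm:

        def color_faces(faces):
            # faces: list of (left_cell, right_cell) pairs.
            used_by_cell = {}                 # cell -> set of colors on its faces
            colors = []
            for lc, rc in faces:
                used = (used_by_cell.setdefault(lc, set())
                        | used_by_cell.setdefault(rc, set()))
                c = 0
                while c in used:              # smallest color unused by both cells
                    c += 1
                colors.append(c)
                used_by_cell[lc].add(c)
                used_by_cell[rc].add(c)
            return colors

        print(color_faces([(0, 1), (1, 2), (0, 2), (2, 3)]))   # -> [0, 1, 2, 0]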

  18. Smartphone-Based Patients' Activity Recognition by Using a Self-Learning Scheme for Medical Monitoring.

    PubMed

    Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong

    2016-06-01

    Smartphone-based activity recognition has recently received remarkable attention in various mobile health applications such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After real-time data collection through this smartphone, we preprocess the data using a coordinate system transformation to eliminate the influence of phone orientation. A set of robust and effective features is then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities for which no a priori knowledge exists in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in the training pools that match unpredictable activity data well. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples, combined with the newly acquired category labels, are then merged into the training dataset to reinforce the recognition ability of the self-learning model. In experiments, we evaluate our scheme using data collected from two postoperative patient volunteers, with six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works very well in most cases. When several types of unseen activities appear without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
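
    A condensed sketch of the self-learning step described above: samples that match the known categories poorly are clustered and given fresh labels, then merged back into the training pool. The rejection threshold, feature dimensions and library choices (scikit-learn) are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def self_learn(X_train, y_train, X_new, n_new=2, reject_dist=2.0):
            # Flag samples whose neighbors in the training pool are all far away.
            knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
            dist, _ = knn.kneighbors(X_new)
            unseen = X_new[dist.mean(axis=1) > reject_dist]
            if len(unseen) >= n_new:
                # Cluster the unseen samples and assign brand-new category labels.
                km = KMeans(n_clusters=n_new, n_init=10).fit(unseen)
                new_labels = km.labels_ + int(y_train.max()) + 1
                X_train = np.vstack([X_train, unseen])
                y_train = np.concatenate([y_train, new_labels])
            return X_train, y_train

        X = np.random.rand(60, 8)
        y = np.repeat(np.arange(3), 20)               # three known activities
        X2, y2 = self_learn(X, y, np.random.rand(20, 8) + 3.0)
        print(sorted(set(y2)))                        # e.g. [0, 1, 2, 3, 4]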

  19. A Hierarchical Z-Scheme α-Fe2O3/g-C3N4 Hybrid for Enhanced Photocatalytic CO2 Reduction.

    PubMed

    Jiang, Zhifeng; Wan, Weiming; Li, Huaming; Yuan, Shouqi; Zhao, Huijun; Wong, Po Keung

    2018-03-01

    The challenge in the artificial photosynthesis of fossil resources from CO2 by utilizing solar energy is to achieve stable photocatalysts with effective CO2 adsorption capacity and high charge-separation efficiency. A hierarchical direct Z-scheme system consisting of urchin-like hematite and carbon nitride provides enhanced photocatalytic activity for the reduction of CO2 to CO, yielding a CO evolution rate of 27.2 µmol g−1 h−1 without cocatalyst or sacrificial reagent, which is >2.2 times higher than that produced by g-C3N4 alone (10.3 µmol g−1 h−1). The enhanced photocatalytic activity of the Z-scheme hybrid material can be ascribed to its unique characteristics that accelerate the reduction process, including: (i) the 3D hierarchical structure of the urchin-like hematite and preferable basic sites, which promote CO2 adsorption, and (ii) the unique Z-scheme feature, which efficiently promotes the separation of electron-hole pairs and enhances the reducibility of electrons in the conduction band of the g-C3N4. The origin of this obvious advantage of the hierarchical Z-scheme is not only explained based on the experimental data but also investigated by modeling CO2 adsorption and CO adsorption on three different atomic-scale surfaces via density functional theory calculations. The study creates new opportunities for hierarchical hematite and other metal-oxide-based Z-scheme systems for solar fuel generation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.

  1. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    PubMed

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
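
    As a pointer to the ECC primitive at the core of such schemes, here is a minimal ECDH key agreement using the Python cryptography package; the full SUA-WSN scheme layers smart cards, passwords and anonymity on top, none of which is shown here:

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        # Each party generates an EC key pair on the P-256 curve.
        user_priv = ec.generate_private_key(ec.SECP256R1())
        node_priv = ec.generate_private_key(ec.SECP256R1())

        # Exchanging public keys yields the same shared secret on both sides.
        s_user = user_priv.exchange(ec.ECDH(), node_priv.public_key())
        s_node = node_priv.exchange(ec.ECDH(), user_priv.public_key())
        assert s_user == s_node

        # Derive a fixed-length session key from the shared secret.
        session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                           info=b"handshake").derive(s_user)
        print(session_key.hex())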

  2. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    PubMed Central

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  3. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted-decision-fusion-rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive a closed-form expression for the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in a wide-area network. The simulation results show that the SHC scheme not only provides better sensing performance compared to the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
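
    A numerical sketch of LRT-based soft combination for a zero-mean Gaussian signal in Gaussian noise: each cluster member computes a log-likelihood ratio from its samples, and the cluster head sums them before thresholding. All variances and the threshold are illustrative assumptions:

        import numpy as np

        def llr(samples, noise_var=1.0, signal_var=0.5):
            # log [ p(x | H1) / p(x | H0) ] for x ~ N(0, noise_var [+ signal_var])
            s0, s1 = noise_var, noise_var + signal_var
            return (0.5 * np.sum(samples**2) * (1.0/s0 - 1.0/s1)
                    - 0.5 * samples.size * np.log(s1 / s0))

        rng = np.random.default_rng(1)
        members = [rng.normal(0.0, 1.0, 200) for _ in range(5)]  # noise-only data

        # Soft combination at the cluster head: sum the members' LLRs.
        combined = sum(llr(x) for x in members)
        print("primary user present:", combined > 0.0)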

  4. Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks

    PubMed Central

    Ahmed, Khandakar; Gregory, Mark A.

    2015-01-01

    The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most state-of-the-art distributed data-centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric-based similarity searching (DCSMSS) is proposed. DCSMSS takes motivation from a vector distance index, called iDistance, in order to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector-based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081
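
    The iDistance idea referenced above maps each multi-dimensional point to a one-dimensional key via its nearest reference point, so that a similarity query becomes an interval search. A compact sketch; the reference points and the spacing constant are arbitrary choices:

        import numpy as np

        def idistance_key(point, refs, c=1000.0):
            # key = i*c + dist(point, ref_i), where ref_i is the nearest reference.
            d = np.linalg.norm(refs - point, axis=1)
            i = int(np.argmin(d))
            return i * c + d[i]

        refs = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
        p = np.array([4.2, 4.9])
        print(idistance_key(p, refs))   # nearest ref is #1, so key lies near 1000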

  5. Numerical investigations of non-collinear optical parametric chirped pulse amplification for Laguerre-Gaussian vortex beam

    NASA Astrophysics Data System (ADS)

    Xu, Lu; Yu, Lianghong; Liang, Xiaoyan

    2016-04-01

    We present, for the first time, a scheme to amplify a Laguerre-Gaussian vortex beam based on non-collinear optical parametric chirped pulse amplification (OPCPA). In addition, a three-dimensional numerical model of non-collinear optical parametric amplification was derived in the frequency domain, taking into account the effects of the non-collinear configuration, temporal and spatial walk-off, group-velocity dispersion and diffraction, in order to trace the dynamics of the Laguerre-Gaussian vortex beam and investigate its critical parameters in the non-collinear OPCPA process. Based on the numerical simulation results, the scheme shows promise for implementation in a relativistic twisted laser pulse system, which will diversify the light-matter interaction field.

  6. Efficient bit sifting scheme of post-processing in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has an essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, the core of which is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme approaches the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means it can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement in the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
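
    One simple way to see why sifting traffic compresses well: detection events are sparse, so sending the gap lengths between detected indices (which an entropy coder can then compress further) carries far fewer bits than sending the full index list. A toy sketch of this idea, not the paper's actual source code:

        def sift_encode(detected_indices):
            # Replace absolute indices by gaps; sparse detections give small gaps.
            gaps, prev = [], -1
            for i in detected_indices:
                gaps.append(i - prev)
                prev = i
            return gaps

        def sift_decode(gaps):
            out, pos = [], -1
            for g in gaps:
                pos += g
                out.append(pos)
            return out

        idx = [3, 17, 18, 90]
        assert sift_decode(sift_encode(idx)) == idx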

  7. Two moment dust and water ice in the MarsWRF GCM

    NASA Astrophysics Data System (ADS)

    Lee, Christopher; Richardson, Mark I.; Newman, Claire E.; Mischna, Michael A.

    2016-10-01

    A new two-moment dust and water ice microphysics scheme has been developed for the MarsWRF General Circulation Model based on the Morrison and Gettelman (2008) scheme; it includes temperature-dependent nucleation processes and energetically constrained condensation and evaporation. Dust consumed in the formation of water ice is also tracked by the model. The two-moment dust scheme simulates dust particles in the Martian atmosphere using a Gamma distribution with a fixed radius for lifted particles. Within the atmosphere the particle distribution is advected and sedimented within the two-moment framework, obviating the requirement for lossy conversion between the continuous Gamma distribution and the discretized bins found in some Mars microphysics schemes. Water ice is simulated using the same Gamma distribution and advected and sedimented in the same way. Water ice nucleation occurs heterogeneously onto dust particles with temperature-dependent contact parameters (e.g. Trainer et al., 2009), and condensation and evaporation follow energetic constraints (e.g. Pruppacher and Klett, 1980; Montmessin et al., 2002), allowing water ice particles to grow in size where necessary. Dust particles are tracked within the ice cores as nucleation occurs, and dust cores advect and sediment along with their parent ice particle distributions. Radiative properties of dust and water particles are calculated as a function of the effective radius of the particles and the distribution width. The new microphysics scheme requires 5 tracers to be tracked as the moments of the dust, water ice, and ice cores. All microphysical processes are simulated entirely within the two-moment framework without any discretization of particle sizes. The effect of this new microphysics scheme on the dust and water ice cloud distributions will be discussed and compared with observations from TES and MCS.
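
    A sketch of the two-moment bookkeeping: with prognostic number and mass mixing ratios for a Gamma size distribution n(r) ∝ r^μ e^(−λr), the slope λ and the effective radius used by the radiation code are diagnosed rather than binned. The shape parameter and density below are illustrative assumptions, not the scheme's values:

        import numpy as np
        from math import gamma as g

        def effective_radius(N, q, mu=2.0, rho=917.0):
            # N: number mixing ratio [1/kg]; q: mass mixing ratio [kg/kg].
            # The mass of a sphere of radius r is (4/3)*pi*rho*r^3, which
            # fixes the slope parameter lambda of the Gamma distribution:
            lam = (4.0 * np.pi * rho * N * g(mu + 4.0)
                   / (3.0 * q * g(mu + 1.0))) ** (1.0 / 3.0)
            # r_eff = <r^3>/<r^2> = (mu + 3) / lambda for a Gamma distribution.
            return (mu + 3.0) / lam

        print(effective_radius(N=1.0e8, q=1.0e-5))   # effective radius in metres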

  8. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    NASA Astrophysics Data System (ADS)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm in which all base station computational resources converge into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnecting fronthaul network with high capacity and low delay. However, the interaction between RRHs and BBUs and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. To overcome this complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software-defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  9. A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems

    PubMed Central

    Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok

    2018-01-01

    Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology, and geometric theorems are then applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target's trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes and then propose a novel passive tracking scheme exploiting the Intercept Theorem (IT). To make IT-based CP estimation available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze the properties of three GT-based tracking algorithms, and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme provides generally finer tracking accuracy under even lower node densities and noisier topologies, in comparison to other schemes. PMID:29562621

  10. Fast packet switching algorithms for dynamic resource control over ATM networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, R.P.; Keattihananant, P.; Chang, T.

    1996-12-01

    Real-time continuous media traffic, such as digital video and audio, is expected to comprise a large percentage of the network load on future high-speed packet-switched networks such as ATM. A major feature which distinguishes high-speed networks from traditional slower-speed networks is the large amount of data the network must process very quickly. For efficient network usage, traffic control mechanisms are essential. Currently, most mechanisms for traffic control (such as flow control) have centered on the support of Available Bit Rate (ABR), i.e., non-real-time, traffic. With regard to ATM, the two major types of schemes which have been proposed for ABR traffic are rate-control and credit-control schemes. Neither of these schemes is directly applicable to real-time Variable Bit Rate (VBR) traffic such as continuous media traffic. Traffic control for continuous media traffic is an inherently difficult problem due to the time-sensitive nature of the traffic and its unpredictable burstiness. In this study, we present a scheme which controls traffic by dynamically allocating/de-allocating resources among competing VCs based upon their real-time requirements. This scheme incorporates a form of rate control, real-time burst-level scheduling and link-by-link flow control. We show analytically the potential performance improvements of our rate-control scheme and present a scheme for buffer dimensioning. We also present simulation results for our schemes and discuss the tradeoffs inherent in maintaining high network utilization while statistically guaranteeing many users' Quality of Service.

  11. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    NASA Technical Reports Server (NTRS)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

    We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as the mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to the lack of observations. It is nevertheless of interest to study the sensitivity of the model to the PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which adds a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations: a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  12. Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems

    NASA Astrophysics Data System (ADS)

    Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun

    2018-01-01

    In indoor visible light communication (VLC) systems, the channels of photo detectors (PDs) at one user are highly correlated, which determines the choice of spatial diversity model for individual users. In a spatial diversity model, the signals received by PDs belonging to one user carry the same information and can be combined directly. Based on the above, we propose an equivalent zero-forcing (ZF) precoding scheme for multiple-user multiple-input multiple-output (MU-MIMO) VLC systems by transforming an indoor MU-MIMO VLC system into an indoor multiple-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of light emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme can not only reduce the computational complexity, but also guarantee the system performance. Furthermore, the proposed scheme does not require noise information when calculating the precoding weights, and has no restrictions on the numbers of APs and PDs.

  13. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time/length-scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is the assumption of full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
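
    A minimal toy sketch of the iterative loop described above, alternating a Monte Carlo-type search for an ion adsorption site with a structural relaxation step. The pairwise energy, the steepest-descent relaxation, and all parameters are illustrative stand-ins for the real force field and MD runs, not the authors' implementation:

      # Toy version of the iterative growth scheme: Monte Carlo-type site
      # identification followed by relaxation between growth steps.
      import numpy as np

      rng = np.random.default_rng(0)

      def energy(cluster):
          # Pairwise Lennard-Jones-like energy; stands in for the real force field.
          d = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
          d = d[np.triu_indices(len(cluster), k=1)]
          return np.sum(d**-12 - 2 * d**-6)

      def propose_adsorption_site(cluster, trials=50):
          # Monte Carlo-type search: try random sites near the cluster,
          # keep the lowest-energy candidate.
          best, best_e = None, np.inf
          for _ in range(trials):
              anchor = cluster[rng.integers(len(cluster))]
              site = anchor + rng.normal(scale=1.0, size=3)
              e = energy(np.vstack([cluster, site]))
              if e < best_e:
                  best, best_e = site, e
          return best

      def relax(cluster, steps=200, lr=1e-3, eps=1e-5):
          # Crude steepest descent on a numerical gradient; stands in for MD.
          for _ in range(steps):
              grad = np.zeros_like(cluster)
              for i in range(len(cluster)):
                  for k in range(3):
                      c1, c2 = cluster.copy(), cluster.copy()
                      c1[i, k] += eps
                      c2[i, k] -= eps
                      grad[i, k] = (energy(c1) - energy(c2)) / (2 * eps)
              cluster = cluster - lr * grad
          return cluster

      cluster = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
      for step in range(5):                      # five growth steps
          cluster = np.vstack([cluster, propose_adsorption_site(cluster)])
          cluster = relax(cluster)               # full relaxation between steps
          print(f"step {step}: {len(cluster)} ions, E = {energy(cluster):.3f}")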

  14. A novel high-speed CMOS circuit based on a gang of capacitors

    NASA Astrophysics Data System (ADS)

    Sharroush, Sherif M.

    2017-08-01

    There is no doubt that complementary metal-oxide semiconductor (CMOS) circuits with wide fan-in suffer from relatively sluggish operation. In this paper, a circuit that contains a gang of capacitors sharing their charge with each other is proposed as an alternative to long N-channel MOS and P-channel MOS stacks. The proposed scheme is investigated quantitatively and verified by simulation using the 45-nm CMOS technology with VDD = 1 V. The time delay, area and power consumption of the proposed scheme are investigated and compared with the conventional static CMOS logic circuit. It is verified that the proposed scheme achieves a 52% saving in the average propagation delay for eight inputs, that it has a smaller area compared to conventional CMOS logic when the number of inputs exceeds three, and that it has a smaller power consumption for a number of inputs exceeding two. The impacts of process variations, component mismatches and technology scaling on the proposed scheme are also investigated.
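
    The speed advantage of such a circuit comes from capacitive charge sharing rather than discharging a long series stack. For two capacitors connected together, charge conservation gives the standard result (a worked equation for the general principle; the paper's multi-capacitor analysis is more involved):

      $$ V_{\text{final}} = \frac{C_1 V_1 + C_2 V_2}{C_1 + C_2} $$

    With C_1 = C_2, V_1 = VDD = 1 V, and V_2 = 0, the shared node settles at 0.5 V after a single sharing event.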

  15. Adaptive independent joint control of manipulators - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1988-01-01

    The author presents a simple decentralized adaptive control scheme for multijoint robot manipulators based on the independent joint control concept. The proposed control scheme for each joint consists of a PID (proportional-integral-derivative) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. The static and dynamic couplings that exist between the joint motions are compensated by the adaptive independent joint controllers while ensuring trajectory tracking. The proposed scheme is implemented on a MicroVAX II computer for motion control of the first three joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite strongly coupled, highly nonlinear joint dynamics. The results confirm that the proposed decentralized adaptive control of manipulators is feasible, in spite of strong interactions between joint motions. The control scheme presented is computationally very fast and is amenable to parallel processing implementation within a distributed computing architecture, where each joint is controlled independently by a simple algorithm on a dedicated microprocessor.
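
    A minimal sketch of the per-joint controller structure described above: PID feedback plus position-velocity-acceleration feedforward with adjustable gains. The gain-update rule here is a simple illustrative gradient law, not the specific adaptation law of the paper:

      # Single-joint adaptive controller: PID feedback + trajectory feedforward.
      import numpy as np

      class AdaptiveJointController:
          def __init__(self, kp=1.0, ki=0.1, kd=0.5, kff=(0.0, 0.0, 0.0), gamma=1e-3):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.kff = np.array(kff, dtype=float)   # feedforward gains (pos, vel, acc)
              self.gamma = gamma                      # adaptation rate (assumed)
              self.integral = 0.0

          def torque(self, q, qd, q_ref, qd_ref, qdd_ref, dt):
              e, ed = q_ref - q, qd_ref - qd
              self.integral += e * dt
              # PID feedback plus feedforward on the desired trajectory
              u = (self.kp * e + self.ki * self.integral + self.kd * ed
                   + self.kff @ np.array([q_ref, qd_ref, qdd_ref]))
              # Illustrative gain adaptation driven by the tracking error
              self.kp += self.gamma * e * e * dt
              self.kd += self.gamma * e * ed * dt
              return u

      ctrl = AdaptiveJointController()
      print(ctrl.torque(q=0.0, qd=0.0, q_ref=0.1, qd_ref=0.0, qdd_ref=0.0, dt=0.001))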

  16. Using different assumptions of aerosol mixing state and chemical composition to predict CCN concentrations based on field measurements in urban Beijing

    NASA Astrophysics Data System (ADS)

    Ren, Jingye; Zhang, Fang; Wang, Yuying; Collins, Don; Fan, Xinxin; Jin, Xiaoai; Xu, Weiqi; Sun, Yele; Cribb, Maureen; Li, Zhanqing

    2018-05-01

    Understanding the impacts of aerosol chemical composition and mixing state on cloud condensation nuclei (CCN) activity in polluted areas is crucial for accurately predicting CCN number concentrations (NCCN). In this study, we predict NCCN under five assumed schemes of aerosol chemical composition and mixing state based on field measurements in Beijing during the winter of 2016. Our results show that the best closure is achieved with the assumption of size-dependent chemical composition for which sulfate, nitrate, secondary organic aerosols, and aged black carbon are internally mixed with each other but externally mixed with primary organic aerosol and fresh black carbon (external-internal size-resolved, abbreviated as EI-SR scheme). The resulting ratios of predicted-to-measured NCCN (RCCN_p/m) were 0.90-0.98 under both clean and polluted conditions. The assumption of an internal mixture with bulk chemical composition (INT-BK scheme) shows good closure, with RCCN_p/m of 1.0-1.16 under clean conditions, implying that it is adequate for CCN prediction in continental clean regions. On polluted days, assuming the aerosol is internally mixed and has a size-dependent chemical composition (INT-SR scheme) achieves better closure than the INT-BK scheme due to the heterogeneity and variation in particle composition at different sizes. The improved closure achieved using the EI-SR and INT-SR assumptions highlights the importance of measuring size-resolved chemical composition for CCN predictions in polluted regions. NCCN is significantly underestimated (with RCCN_p/m of 0.66-0.75) when using the schemes of external mixtures with bulk (EXT-BK scheme) or size-resolved composition (EXT-SR scheme), implying that primary particles experience rapid aging and physical mixing processes in urban Beijing. However, our results show that the aerosol mixing state plays a minor role in CCN prediction when κ_org exceeds 0.1.
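
    Internal-mixture closure schemes of this kind typically rest on the standard volume-weighted hygroscopicity mixing rule of κ-Köhler theory (a hedged sketch of the usual formulation, not quoted from this record):

      $$ \kappa_{\text{mix}} = \sum_i \varepsilon_i \, \kappa_i $$

    where ε_i is the volume fraction of component i. For example, a particle of roughly 60% ammonium nitrate (κ ≈ 0.67) and 40% organics (κ ≈ 0.1) would have κ_mix ≈ 0.6 × 0.67 + 0.4 × 0.1 ≈ 0.44.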

  17. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    PubMed Central

    Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272

  18. An improved biometrics-based remote user authentication scheme with user anonymity.

    PubMed

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  19. Influence of surface pre-treatment on the electronic levels in silicon MaWCE nanowires.

    PubMed

    Venturi, Giulia; Castaldini, Antonio; Schleusener, Alexander; Sivakov, Vladimir; Cavallini, Anna

    2015-05-15

    Deep level transient spectroscopy (DLTS) was performed on n-doped silicon nanowires grown by metal-assisted wet chemical etching (MaWCE) with gold as the catalyst in order to investigate the energetic scheme inside the bandgap. To observe the possible dependence of the level scheme on the processing temperature, DLTS measurements were performed on nanowires grown on a non-treated Au/Si surface and on a thermally pre-treated Au/Si surface. A noticeable modification of the configuration of the energy levels, induced by the annealing process, was observed. Based on our results on these MaWCE nanowires and on literature data about deep levels in bulk silicon, some hypotheses were advanced regarding the identification of the defects responsible for the observed energy levels.

  20. Current Saturation Avoidance with Real-Time Control using DPCS

    NASA Astrophysics Data System (ADS)

    Ferrara, M.; Hutchinson, I.; Wolfe, S.; Stillerman, J.; Fredian, T.

    2008-11-01

    Tokamak ohmic-transformer and equilibrium-field coils need to be able to operate near their maximum current capabilities. However, if they reach their upper limit during high-performance discharges or in the presence of a strong off-normal event, shape control is compromised, and instability or even plasma disruptions can result. On Alcator C-Mod we designed and tested an anti-saturation routine which detects the impending saturation of OH and EF currents and interpolates to a neighboring safe equilibrium in real time. The routine was implemented with a multi-processor, multi-time-scale control scheme, which is based on a master process and multiple asynchronous slave processes. The scheme is general and can be used for any computationally intensive algorithm. USDoE award DE-FC02-99ER545512.

  1. Numerical investigation of shock induced bubble collapse in water

    NASA Astrophysics Data System (ADS)

    Apazidis, N.

    2016-04-01

    A semi-conservative, stable, interphase-capturing numerical scheme for shock propagation in heterogeneous systems is applied to the problem of shock propagation in liquid-gas systems. The scheme is based on the volume-fraction formulation of the equations of motion for liquid and gas phases with separate equations of state. The semi-conservative formulation of the governing equations ensures the absence of spurious pressure oscillations at the material interphases between liquid and gas. Interaction of a planar shock in water with a single spherical bubble as well as twin adjacent bubbles is investigated. Several stages of the interaction process are considered, including focusing of the transmitted shock within the deformed bubble, creation of a water-hammer shock as well as generation of high-speed liquid jet in the later stages of the process.

  2. Composability-Centered Convolutional Neural Network Pruning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Xipeng; Guan, Hui; Lim, Seung-Hwan

    This work studies the composability of the building blocks of structural CNN models (e.g., GoogLeNet and Residual Networks) in the context of network pruning. We empirically validate that a network composed of pre-trained building blocks (e.g., residual blocks and Inception modules) not only gives a better initial setting for training, but also allows the training process to converge at a significantly higher accuracy in much less time. Based on that insight, we propose a composability-centered design for CNN network pruning. Experiments show that this new scheme shortens the configuration process in CNN network pruning by up to 186.8X for ResNet-50 and up to 30.2X for Inception-V3, and meanwhile, the models it finds that meet the accuracy requirement are significantly more compact than those found by default schemes.

  3. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940

  4. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.

  5. RootGraph: a graphic optimization tool for automated image analysis of plant roots

    PubMed Central

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N.; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J.

    2015-01-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process, is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions. PMID:26224880

  6. A chaotic cryptosystem for images based on Henon and Arnold cat map.

    PubMed

    Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
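
    A minimal sketch of the two chaotic primitives named above: an Arnold cat map position permutation and a Hénon map keystream. The parameters, key handling, and the toy XOR step are illustrative, not the paper's full bit- and pixel-level construction:

      # Arnold cat map permutation plus a Henon map keystream (toy example).
      import numpy as np

      def arnold_cat(img, iterations=1):
          # Permutes pixels of a square N x N image: (x, y) -> (x + y, x + 2y) mod N.
          n = img.shape[0]
          out = img.copy()
          for _ in range(iterations):
              x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
              out = out[(x + y) % n, (x + 2 * y) % n]
          return out

      def henon_sequence(length, a=1.4, b=0.3, x0=0.1, y0=0.3):
          # Classic Henon map; the trajectory supplies key material.
          x, y, seq = x0, y0, []
          for _ in range(length):
              x, y = 1 - a * x * x + y, b * x
              seq.append(x)
          return np.array(seq)

      img = np.arange(16, dtype=np.uint8).reshape(4, 4)        # toy "image"
      scrambled = arnold_cat(img, iterations=3)                # position permutation
      key = (np.abs(henon_sequence(16)) * 1e6).astype(np.uint64) % 256
      cipher = scrambled ^ key.reshape(4, 4).astype(np.uint8)  # toy masking step
      print(cipher)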

  7. Cultivation of students' engineering designing ability based on optoelectronic system course project

    NASA Astrophysics Data System (ADS)

    Cao, Danhua; Wu, Yubin; Li, Jingping

    2017-08-01

    We carry out teaching based on an optoelectronics-related course group, aimed at junior students majoring in Optoelectronic Information Science and Engineering. The "Optoelectronic System Course Project" is product-design-oriented and lasts for a whole semester. It provides a chance for students to experience the whole process of product design and to improve their abilities to search the literature, verify schemes, and design and implement their schemes. In the teaching process, each project topic is carefully selected and repeatedly refined to guarantee that the projects offer knowledge integrity, engineering relevance and enjoyment. Moreover, we set up a team of professional and experienced teachers and build up a learning community. Meanwhile, communication between students and teachers, as well as interaction among students, is taken seriously in order to improve their teamwork ability and communication skills. Therefore, students not only have a chance to review the knowledge hierarchy of optics, electronics, and computer science, but are also able to improve their engineering mindset and innovation consciousness.

  8. A Chaotic Cryptosystem for Images Based on Henon and Arnold Cat Map

    PubMed Central

    Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications. PMID:25258724

  9. Global conjecturing process in pattern generalization problem

    NASA Astrophysics Data System (ADS)

    Sutarto; Nusantara, Toto; Subanji; Dwi Hastuti, Intan; Dafik

    2018-04-01

    The aim of this study is to describe the global conjecturing process in pattern generalization based on APOS theory. The subjects were 15 8th-grade students of a junior high school. The data were collected using a Pattern Generalization Problem (PGP) and interviews. After the students had completed the PGP, they were interviewed based on their written work to understand the conjecturing process. These interviews were videotaped. The results of the study reveal that the global conjecturing process occurs at the action phase, in which subjects build a conjecture by observing and counting the number of squares completely, without distinguishing between black and white squares; finally, at the process phase, the object and scheme were perfectly performed.

  10. A secure and efficient chaotic map-based authenticated key agreement scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav

    2014-10-01

    Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks, as its services are provided over the public network. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory. It enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that their scheme is vulnerable to denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. Further, our aim is to propose a new chaotic map-based anonymous user authentication scheme for TMIS to overcome the weaknesses of Jiang et al.'s scheme, while also retaining the original merits of their scheme. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.

  11. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
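
    A minimal sketch of LFSR-based whitening of a raw bit stream, the kind of extraction named above; the register width and tap positions are illustrative, not those of the cited implementation:

      # Fold a raw (possibly biased) bit stream into an LFSR and read back its
      # output; the linear recurrence spreads short-range structure.
      def lfsr_extract(raw_bits, taps=(31, 28), width=32):
          state = (1 << width) - 1          # nonzero seed
          out = []
          for b in raw_bits:
              fb = b
              for t in taps:
                  fb ^= (state >> (t - 1)) & 1
              state = ((state << 1) | fb) & ((1 << width) - 1)
              out.append(state & 1)
          return out

      raw = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0]   # e.g. digitized noise, biased toward 1
      print(lfsr_extract(raw))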

  12. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform

    PubMed Central

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-01-01

    With the ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing proportionally to the rapid growth of the smart home market such as multiplatform heterogeneity and new security threats. In addition, the smart home sensors have so low computing resources that they cannot process complicated computation tasks, which is required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper aimed to propose a scheme to build infrastructure in which communication entities can securely authenticate and design security channel with physically unclonable PUFs and the TTP that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms. PMID:27399699

  13. Global magnetohydrodynamic simulations on multiple GPUs

    NASA Astrophysics Data System (ADS)

    Wong, Un-Hong; Wong, Hon-Cheng; Ma, Yonghui

    2014-01-01

    Global magnetohydrodynamic (MHD) models play the major role in investigating the solar wind-magnetosphere interaction. However, the huge computational requirement of global MHD simulations is also the main problem that needs to be solved. With the recent development of modern graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA), it is possible to perform global MHD simulations in a more efficient manner. In this paper, we present a global MHD simulator on multiple GPUs using CUDA 4.0 with GPUDirect 2.0. Our implementation is based on the modified leapfrog scheme, which is a combination of the leapfrog scheme and the two-step Lax-Wendroff scheme. GPUDirect 2.0 is used in our implementation to drive multiple GPUs. All data transfers and kernel processing are managed with the CUDA 4.0 API instead of MPI or OpenMP. Performance measurements are made on a multi-GPU system with eight NVIDIA Tesla M2050 (Fermi architecture) graphics cards. These measurements show that our multi-GPU implementation achieves a peak performance of 97.36 GFLOPS in double precision.
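
    A scalar sketch of the modified leapfrog idea for 1D linear advection, assuming the common construction of one two-step Lax-Wendroff step followed by (l - 1) leapfrog steps; the simulator applies the scheme to the full MHD system on GPUs:

      # Modified leapfrog for u_t + c u_x = 0 on a periodic grid (illustrative).
      import numpy as np

      def modified_leapfrog(u, c, dx, dt, nsteps, l=8):
          lam = c * dt / dx
          u_prev = u.copy()
          for n in range(nsteps):
              if n % l == 0:
                  # Two-step Lax-Wendroff (restarts the pair of time levels)
                  u_half = 0.5 * (u + np.roll(u, -1)) - 0.5 * lam * (np.roll(u, -1) - u)
                  u_new = u - lam * (u_half - np.roll(u_half, 1))
              else:
                  # Leapfrog, centered in time and space
                  u_new = u_prev - lam * (np.roll(u, -1) - np.roll(u, 1))
              u_prev, u = u, u_new
          return u

      x = np.linspace(0, 1, 200, endpoint=False)
      u0 = np.exp(-200 * (x - 0.3) ** 2)
      dt = 0.4 * (x[1] - x[0])                     # CFL-stable step
      print(modified_leapfrog(u0, c=1.0, dx=x[1] - x[0], dt=dt, nsteps=400).max())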

  14. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform.

    PubMed

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-07-05

    With the ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing proportionally to the rapid growth of the smart home market such as multiplatform heterogeneity and new security threats. In addition, the smart home sensors have so low computing resources that they cannot process complicated computation tasks, which is required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper aimed to propose a scheme to build infrastructure in which communication entities can securely authenticate and design security channel with physically unclonable PUFs and the TTP that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms.

  15. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive which is very suitable for cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which is a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only that user's secret key related to the attribute needs to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven to be semantically secure against chosen ciphertext-policy and chosen-plaintext attacks in the generic bilinear group model, and semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.

  16. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage

    PubMed Central

    Wang, Shangping; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive which is very suitable for cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which is a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only that user's secret key related to the attribute needs to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven to be semantically secure against chosen ciphertext-policy and chosen-plaintext attacks in the generic bilinear group model, and semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577

  17. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550

  18. Design of a Vehicle Based Intervention System to Prevent Ozone Loss

    NASA Technical Reports Server (NTRS)

    Cole, Robin; Fisher, Daniel; Meade, Matt; Neel, James; Olson, Kristin; Pittman, Andrew; Valdivia, Anne; Wibisono, Aria; Mason, William H.; Kirschbaum, Nathan

    1995-01-01

    This project was designed to be completed over a period of three years. Overall project goals were: (1) To understand the processes that contribute to stratospheric ozone loss; (2) To determine the best prevention scheme for loss; (3) To design a delivery vehicle to accomplish the prevention scheme. The 1994-1995 design objectives included: (1) To review the results of the 1993-1994 design team, including a reevaluation of the major assumptions and criteria selected to choose a vehicle; (2) To evaluate preliminary vehicle concepts and perform quantitative trade studies to select the optimal vehicle concept.

  19. Characterization and optimization of an optical and electronic architecture for photon counting

    NASA Astrophysics Data System (ADS)

    Correa, M. del M.; Pérez, F. R.

    2018-04-01

    This work shows a time-domain method for the discrimination and digitization of pulses coming from optical detectors, considering the presence of electronic noise and afterpulsing. The developed signal processing scheme is based on a time-to-digital converter (TDC) and a voltage discriminator. After setting appropriate parameters for acquiring spectra, the acquired data were corrected for wavelength, the intensity response function, and noise. The performance of this scheme is assessed through its characterization as well as through comparison of its spectra with those obtained by a commercial Ocean Optics HR4000 reference spectrometer.

  20. Nonreciprocal frequency conversion in a multimode microwave optomechanical circuit

    NASA Astrophysics Data System (ADS)

    Feofanov, A. K.; Bernier, N. R.; Toth, L. D.; Koottandavida, A.; Kippenberg, T. J.

    Nonreciprocal devices such as isolators, circulators, and directional amplifiers are pivotal to quantum signal processing with superconducting circuits. In the microwave domain, commercially available nonreciprocal devices are based on ferrite materials. They are barely compatible with superconducting quantum circuits, lossy, and cannot be integrated on chip. Significant potential exists for implementing non-magnetic chip-scale nonreciprocal devices using microwave optomechanical circuits. Here we demonstrate the possibility of nonreciprocal frequency conversion in a multimode microwave optomechanical circuit using solely the optomechanical interaction between modes. The conversion scheme and results reflecting the actual progress on the experimental implementation of the scheme will be presented.

  1. Market behavior and performance of different strategy evaluation schemes

    NASA Astrophysics Data System (ADS)

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-08-01

    Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents’ strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme’s success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
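
    A toy comparison of the three evaluation schemes on a synthetic exogenous price series; the scoring is deliberately simplified, and the wealth-game stand-in simply follows whichever of trend-following and trend-opposing has paid off so far:

      # Majority game follows yesterday's trend, minority game opposes it,
      # and the wealth-game stand-in sides with the more profitable of the two.
      import numpy as np

      rng = np.random.default_rng(1)
      price = 100 + np.cumsum(rng.normal(0.0, 1.0, 1000))  # exogenous price path

      ret = np.diff(price)
      trend = np.sign(ret[:-1])     # direction of yesterday's move
      move = np.sign(ret[1:])       # direction of today's move

      majority_payoff = trend * move            # +1 whenever the trend continues
      majority = majority_payoff.sum()
      minority = -majority                      # minority gains on reversals

      # Decide at step i using cumulative payoff up to step i - 1 (no lookahead).
      score = np.cumsum(majority_payoff)
      decision = np.where(score[:-1] >= 0, trend[1:], -trend[1:])
      wealth = np.sum(decision * move[1:])

      print(f"majority: {majority:.0f}, minority: {minority:.0f}, wealth: {wealth:.0f}")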

  2. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    PubMed

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

    One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments while keeping the other compression parameters fixed. The scheme adopts the discrete wavelet transform (DWT) method to decompose the ECG data, the bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, like the limited payload size and low power consumption.
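
    A minimal sketch of the pipeline's shape: a one-level Haar DWT, coarse quantization, and run-length encoding of the resulting zero runs. The actual scheme uses a multi-level DWT, BFP quantization, and a modified RLE with automatic segment sizing, none of which are reproduced here:

      # One-level Haar DWT + quantization + run-length encoding (toy pipeline).
      import numpy as np

      def haar_dwt(x):
          x = np.asarray(x, dtype=float)
          approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
          detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
          return approx, detail

      def run_length_encode(symbols):
          # (value, run-length) pairs; zero runs dominate after thresholding.
          out, prev, run = [], symbols[0], 1
          for s in symbols[1:]:
              if s == prev:
                  run += 1
              else:
                  out.append((prev, run))
                  prev, run = s, 1
          out.append((prev, run))
          return out

      ecg = np.sin(np.linspace(0, 20 * np.pi, 256))   # synthetic stand-in signal
      approx, detail = haar_dwt(ecg)
      q = np.round(detail * 4).astype(int)            # coarse quantization
      q[np.abs(q) < 2] = 0                            # drop small coefficients
      print(run_length_encode(q.tolist())[:8])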

  3. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradients), the radiation through cloud coverage (vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated using observational data from TRMM and other major field campaigns. In this talk, we will present the high-resolution (1 km) GCE and WRF model simulations and compare the simulated model results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high-latitude cold season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  4. A parallel-pipelined architecture for a multi carrier demodulator

    NASA Astrophysics Data System (ADS)

    Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.

    1991-03-01

    Analog devices have been used for processing the information on board satellites. Presently, digital devices are being used because they are economical and flexible compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time-sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.

  5. A parallel-pipelined architecture for a multi carrier demodulator. M.S. Thesis Final Technical Report, Jan. 1989 - Aug. 1990

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.

    1991-01-01

    Analog devices have been used for processing the information on board satellites. Presently, digital devices are being used because they are economical and flexible compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time-sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.

  6. Variable disparity estimation based intermediate view reconstruction in dynamic flow allocation over EPON-based access networks

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo

    2008-06-01

    In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA) and transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm; 16-view images are then synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, of this scheme are also provided.
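
    A sketch of fixed-block disparity estimation with the sum of absolute differences (SAD); the paper's VBMA additionally varies the block size adaptively, which is omitted here for brevity:

      # Block-matching disparity estimation over a stereo pair (fixed blocks).
      import numpy as np

      def block_match(left, right, block=8, max_disp=16):
          h, w = left.shape
          disp = np.zeros((h // block, w // block), dtype=int)
          for by in range(h // block):
              for bx in range(w // block):
                  y, x = by * block, bx * block
                  ref = left[y:y + block, x:x + block].astype(int)
                  best, best_sad = 0, np.inf
                  for d in range(min(max_disp, x) + 1):
                      cand = right[y:y + block, x - d:x - d + block].astype(int)
                      sad = np.abs(ref - cand).sum()       # matching cost
                      if sad < best_sad:
                          best, best_sad = d, sad
                  disp[by, bx] = best
          return disp

      left = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
      right = np.roll(left, -2, axis=1)   # synthetic 2-pixel horizontal shift
      print(block_match(left, right))     # interior blocks recover disparity 2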

  7. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-04-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still form the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. While building upon the physical principles that served well in the original APOLLO, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is not performed as a binary yes/no decision based on these physical principles but is expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from clear-conservative to cloud-conservative depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-real-time use and for application to large amounts of historical satellite data. Thus the radiative transfer solution is approximated by the same two-stream approach that had also been used for the original APOLLO. This allows the algorithm to be robust enough to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover, it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy it is based on. Furthermore, a couple of example results from NOAA-18 are presented.

  8. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

    Color quantization (CQ) is an image processing task popularly used to convert true color images to palletized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency-sensitive self-organizing maps (FS-SOM) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with a neuron-dependent frequency-sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging an existing encoder in an overall lossy compression scheme.
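
    A minimal sketch of the frequency-sensitive winner selection at the heart of FS-SOM: each neuron's distance is scaled by its win count, which pushes under-utilized palette entries into play. The neighborhood adaptation, butterfly input permutation, and dead-neuron reinitialization of the full method are omitted:

      # Frequency-sensitive competitive learning of a 16-color palette (toy).
      import numpy as np

      rng = np.random.default_rng(2)
      pixels = rng.integers(0, 256, (10000, 3)).astype(float)   # training colors
      palette = rng.integers(0, 256, (16, 3)).astype(float)     # initial palette
      wins = np.ones(16)

      for p in pixels:
          d = np.linalg.norm(palette - p, axis=1) * wins         # frequency-scaled distance
          j = np.argmin(d)                                       # winner selection
          wins[j] += 1
          palette[j] += (p - palette[j]) / wins[j]               # move winner toward input

      print(np.round(palette))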

  9. Mathematical simulation of the drying of suspensions and colloidal solutions by their depressurization

    NASA Astrophysics Data System (ADS)

    Lashkov, V. A.; Levashko, E. I.; Safin, R. G.

    2006-05-01

    The heat and mass transfer in the process of drying of high-humidity materials by their depressurization has been investigated. The results of experimental investigation and mathematical simulation of the indicated process are presented. They allow one to determine the regularities of this process and predict the quality of the finished product. A technological scheme and an engineering procedure for calculating the drying of the liquid base of a soap are presented.

  10. Sensitivity Simulation of Compressed Sensing Based Electronic Warfare Receiver Using Orthogonal Matching Pursuit Algorithm

    DTIC Science & Technology

    2016-02-01

    The orthogonal matching pursuit (OMP) algorithm is used to process CS data. The insufficient sparsity of the signal adversely affects the signal detection probability for ... with equal probability. The scheme was proposed [2] for image processing using a single-pixel camera, where the field of view was masked by a grid ... modulation.

  11. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    PubMed

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate, that removal processes are highly variable in time and space and this has to be considered for future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
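
    In its simplest form, the Lagrangian mass-balance idea compares the load of the same water parcel at the two ends of a reach (a hedged sketch of the general balance with hypothetical symbols; the paper's formulation additionally accounts for precise transport velocities and mixing):

      $$ R = 1 - \frac{Q_2 C_2}{Q_1 C_1 + \sum_j L_j} $$

    where Q_i and C_i are discharge and concentration at the upstream (1) and downstream (2) stations, L_j are lateral mass inputs along the reach (e.g., tributaries or wastewater outfalls), and R is the fractional removal of the pollutant along the segment.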

  12. Stakeholders Perspectives on the Success Drivers in Ghana’s National Health Insurance Scheme – Identifying Policy Translation Issues

    PubMed Central

    Fusheini, Adam; Marnoch, Gordon; Gray, Ann Marie

    2017-01-01

    Background: Ghana’s National Health Insurance Scheme (NHIS), established by an Act of Parliament (Act 650) in 2003 and since replaced by Act 852 of 2012, remains, in African terms, unprecedented in terms of growth and coverage. As a result, the scheme has received praise for its associated legal reforms, clinical audit mechanisms and for serving as a hub for knowledge sharing and learning within the context of South-South cooperation. The scheme continues to shape national health insurance thinking in Africa. While the success, especially in coverage and financial access, has been highlighted by many authors, insufficient attention has been paid to critical and context-specific factors. This paper seeks to fill that gap. Methods: Based on an empirical qualitative case study of stakeholders’ views on challenges and success factors in four mutual schemes (district offices) located in two regions of Ghana, the study uses the concept of policy translation to assess whether the Ghana scheme could provide useful lessons to other African and developing countries in their quest to implement social/national health insurance schemes. Results: In the study, interviewees referred to both ‘hard’ and ‘soft’ elements as driving the "success" of the Ghana scheme. The main ‘hard’ elements include bureaucratic and legal enforcement capacities; IT; financing; governance, administration and management; regulating membership of the scheme; and service provision and coverage capabilities. The ‘soft’ elements identified relate to: the background/context of the health insurance scheme; innovative ways of funding the NHIS; the hybrid nature of the Ghana scheme; political will, commitment by government, stakeholders and public cooperation; the social structure of Ghana (solidarity); and ownership and participation. Conclusion: Other developing countries can expect to translate rather than re-assemble a national health insurance programme, in an incomplete and highly modified form, over a period of years, amounting to a process best conceived as germination as opposed to emulation. The Ghana experience illustrates that in adopting health financing systems that function well, countries need to customise systems (policy customisation) to suit their socio-economic, political and administrative settings. Home-grown health financing systems that resonate with social values will also need to be found in the process of translation. PMID:28812815

  13. OML: optical maskless lithography for economic design prototyping and small-volume production

    NASA Astrophysics Data System (ADS)

    Sandstrom, Tor; Bleeker, Arno; Hintersteiner, Jason; Troost, Kars; Freyer, Jorge; van der Mast, Karel

    2004-05-01

    The business case for Maskless Lithography is more compelling than ever before, due to more critical processes, rising mask costs and shorter product cycles. The economics of Maskless Lithography gives a crossover volume from Maskless to mask-based lithography at surprisingly many wafers per mask for surprisingly few wafers per hour throughput. Also, small-volume production will in many cases be more economical with Maskless Lithography, even when compared to "shuttle" schemes, reticles with multiple layers, etc. The full benefit of Maskless Lithography is only achievable by duplicating processes that are compatible with volume production processes on conventional scanners. This can be accomplished by the integration of pattern generators based on spatial light modulator technology with state-of-the-art optical scanner systems. This paper reports on the system design of an Optical Maskless Scanner in development by ASML and Micronic: small-field optics with high demagnification, variable NA and illumination schemes, spatial light modulators with millions of MEMS mirrors on CMOS drivers, a data path with a sustained data flow of more than 250 GPixels per second, stitching of sub-fields to scanner fields, and rasterization and writing strategies for throughput and good image fidelity. Predicted lithographic performance based on image simulations is also shown.

  14. Prosodic expectations in silent reading: ERP evidence from rhyme scheme and semantic congruence in classic Chinese poems.

    PubMed

    Chen, Qingrong; Zhang, Jingjing; Xu, Xiaodong; Scheepers, Christoph; Yang, Yiming; Tanenhaus, Michael K

    2016-09-01

    In an ERP study, classic Chinese poems with a well-known rhyme scheme were used to generate the expectation of a rhyme in the absence of an expectation for a specific character. Critical characters were either consistent or inconsistent with the expected rhyme scheme, and semantically congruent or incongruent with the content of the poem. These stimuli allowed us to examine whether a top-down rhyme scheme expectation would affect relatively early components of the ERP associated with character-to-sound mapping (P200) and lexically mediated semantic processing (N400). The ERP data revealed that rhyme scheme congruence, but not semantic congruence, modulated the P200: rhyme-incongruent characters elicited a P200 effect across the head, demonstrating that top-down expectations influence early phonological coding of the character before lexical-semantic processing. Rhyme scheme incongruence also produced a right-lateralized N400-like effect. Moreover, compared to semantically congruous poems, semantically incongruous poems produced a larger N400 response only when the character was consistent with the expected rhyme scheme. The results suggest that top-down prosodic expectations can modulate early phonological processing in visual word recognition, indicating that prosodic expectations may play an important role in silent reading. They also suggest that semantic processing is influenced by general knowledge of text genre. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Train integrity detection risk analysis based on PRISM

    NASA Astrophysics Data System (ADS)

    Wen, Yuan

    2018-04-01

    The GNSS-based Train Integrity Monitoring System (TIMS) is an effective and low-cost scheme for train integrity detection. However, as an external auxiliary system of CTCS, GNSS may be affected by external conditions, such as the uncertainty of wireless communication channels, which can lead to communication and positioning failures. To help guarantee the reliability and safety of train operation, this article proposes a PRISM-based risk analysis method for train integrity detection. First, we analyze and model the risk factors in the GNSS communication process and the on-board communication process. Then, we evaluate the model in PRISM using field data. Finally, we discuss how these risk factors influence the train integrity detection process.
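
    As a rough illustration of the kind of probabilistic model checked in such an analysis, the sketch below evaluates an absorbing discrete-time Markov chain in Python. The states and transition probabilities are illustrative assumptions, not values from the paper; a real analysis would encode the model directly in the PRISM language and query it with properties such as P=? [ F<=k "failure" ].

```python
# Hypothetical sketch: a train-integrity detection risk model as a
# discrete-time Markov chain, analogous to a model one might check in PRISM.
# All states and probabilities below are illustrative assumptions.
import numpy as np

# States: 0 = nominal, 1 = GNSS link degraded, 2 = on-board link degraded,
#         3 = integrity detection failed (absorbing)
P = np.array([
    [0.97, 0.02, 0.01, 0.00],   # nominal
    [0.80, 0.15, 0.00, 0.05],   # GNSS degraded: may recover or fail
    [0.85, 0.00, 0.10, 0.05],   # on-board degraded: may recover or fail
    [0.00, 0.00, 0.00, 1.00],   # failure is absorbing
])

def failure_probability(steps: int) -> float:
    """Probability of reaching the failure state within `steps` transitions,
    starting from the nominal state."""
    dist = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(steps):
        dist = dist @ P
    return dist[3]

if __name__ == "__main__":
    # Analogue of the PRISM query P=? [ F<=k "failure" ]
    for k in (10, 100, 1000):
        print(f"P(failure within {k} steps) = {failure_probability(k):.4f}")
```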

  16. Intelligent design of permanent magnet synchronous motor based on CBR

    NASA Astrophysics Data System (ADS)

    Li, Cong; Fan, Beibei

    2018-05-01

    The design process of the permanent magnet synchronous motor (PMSM) suffers from several problems, such as its complexity, over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge. To address these problems, a design method for the PMSM based on case-based reasoning (CBR) is proposed. In this paper, a case-similarity calculation method is proposed to retrieve a suitable initial scheme, helping designers arrive at a conceptual PMSM solution quickly by referencing previous design cases. The case-retention process gives the system a self-enriching capability, so that its design ability improves with continued use.
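
    The retrieval step of CBR is, at its core, a weighted nearest-neighbour search over requirement attributes. The sketch below illustrates this under assumed attributes (rated power, rated speed, pole pairs), weights, and a toy case base; none of these values come from the paper.

```python
# Minimal sketch of CBR retrieval for PMSM design: weighted nearest-neighbour
# similarity over normalised attributes. Attributes, weights, and cases are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class MotorCase:
    name: str
    rated_power_kw: float
    rated_speed_rpm: float
    pole_pairs: int

ATTRS = ("rated_power_kw", "rated_speed_rpm", "pole_pairs")

def similarity(query, case, weights, ranges) -> float:
    """Weighted global similarity: 1 minus the normalised attribute distance."""
    score = 0.0
    for attr, w, rng in zip(ATTRS, weights, ranges):
        d = abs(getattr(query, attr) - getattr(case, attr)) / rng
        score += w * (1.0 - min(d, 1.0))
    return score

def retrieve(query, case_base, weights, ranges):
    """Return the stored design most similar to the query requirements."""
    return max(case_base, key=lambda c: similarity(query, c, weights, ranges))

case_base = [
    MotorCase("design-A", 5.5, 1500, 4),
    MotorCase("design-B", 11.0, 3000, 2),
    MotorCase("design-C", 7.5, 1500, 4),
]
query = MotorCase("query", 7.0, 1450, 4)
weights = (0.5, 0.3, 0.2)    # relative importance of each attribute
ranges = (10.0, 3000.0, 6)   # attribute ranges used for normalisation
print("initial scheme from case:", retrieve(query, case_base, weights, ranges).name)
```

    In a full system, the retrieved case would then be adapted to the new requirements and, if successful, retained in the case base, which is the self-enriching loop the abstract describes.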

  17. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform was mostly computed on general-purpose processors (CPUs) using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is well suited to single-core CPUs. However, for parallel processing on multi-core processors, the scheme is ill-suited because of its large number of steps. On such architectures, each step corresponds to a point at which data must be exchanged, and these synchronisation points often form a performance bottleneck. Our approach rearranges the calculations inside the transform so as to reduce the number of steps; in other words, we propose a new scheme that is friendly to parallel environments. In evaluations on multi-core CPUs, it consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
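
    For reference, the sketch below shows the kind of baseline being improved upon: a single-level separable CDF 5/3 lifting transform applied to rows and then to columns (with a periodic-boundary simplification). The barrier between the two passes, and between the predict and update steps, is exactly the kind of data-exchange point the proposed rearrangement reduces. This is a generic textbook formulation, not the authors' code.

```python
# Single-level separable 2D DWT via the CDF 5/3 lifting scheme.
# Periodic boundary handling is a simplification; even-length input assumed.
import numpy as np

def lifting_53(signal: np.ndarray) -> np.ndarray:
    """One level of the CDF 5/3 lifting transform along a 1D signal."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    # Predict step: detail = odd sample minus average of even neighbours.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: approximation = even sample plus quarter of nearby details.
    s = even + 0.25 * (d + np.roll(d, 1))
    return np.concatenate([s, d])

def dwt2_separable(image: np.ndarray) -> np.ndarray:
    rows = np.apply_along_axis(lifting_53, 1, image)   # horizontal pass
    return np.apply_along_axis(lifting_53, 0, rows)    # vertical pass (barrier)

img = np.arange(64, dtype=float).reshape(8, 8)
print(dwt2_separable(img).round(2))
```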

  18. Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    We show a simple example of a secret sharing scheme, encoding a classical secret into quantum shares, that can realize an access structure impossible for classical information processing under the same limitation on the size of each share. The example is based on quantum stabilizer codes.

  19. Application of Intel Many Integrated Core (MIC) architecture to the Yonsei University planetary boundary layer scheme in Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model provides operational services worldwide and is linked to our daily activities, particularly during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) schemes in WRF. The PBL scheme is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column; it determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum for the entire atmospheric column. The YSU scheme is very well suited to massively parallel computation, as there are no interactions among horizontal grid points (a property sketched in code below). To accelerate the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a many-core processor design whose strengths are efficient parallelization and vectorization. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on a Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved performance on an Intel Xeon E5-2603 by a factor of 1.6x relative to the first version of the multi-threaded code.
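
    The key property exploited here, that horizontal grid columns do not interact, makes the scheme an embarrassingly parallel map over columns, as the sketch below illustrates. The per-column physics is a placeholder stand-in, not the actual YSU tendency computation or WRF code.

```python
# Illustrative only: each horizontal column is processed independently, so
# columns can be distributed across cores with no inter-column communication.
# ysu_column_stub is a placeholder for the real per-column PBL computation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def ysu_column_stub(column: np.ndarray) -> np.ndarray:
    """Stand-in for the per-column tendency computation (vertical levels only)."""
    out = column.copy()
    # e.g. a vertical diffusion-like smoothing over model levels
    out[1:-1] += 0.1 * (column[:-2] - 2 * column[1:-1] + column[2:])
    return out

def run_parallel(field: np.ndarray, workers: int = 4) -> np.ndarray:
    """field has shape (n_columns, n_levels); columns are independent."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.array(list(pool.map(ysu_column_stub, field)))

if __name__ == "__main__":
    field = np.random.rand(1000, 60)   # 1000 horizontal columns, 60 levels
    print(run_parallel(field).shape)
```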

  20. An improved biometrics-based authentication scheme for telecare medical information systems.

    PubMed

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

    The telecare medical information system (TMIS) offers healthcare delivery services, allowing patients to acquire their desired medical services conveniently over public networks. The protection of patients' privacy and data confidentiality is therefore significant. Recently, Mishra et al. proposed a biometrics-based authentication scheme for the telecare medical information system. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against the known session key attack and the impersonation attack. We therefore present a modified biometrics-based authentication scheme for TMIS that eliminates these weaknesses. In addition, we demonstrate the completeness of the proposed scheme through BAN logic. Compared to related schemes, our protocol provides stronger security and is more practical.
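
    As a generic illustration of the freshness mechanism such schemes rely on to resist replay, session-key, and impersonation attacks, the sketch below shows a nonce-based HMAC challenge-response in Python. This is not Guo et al.'s protocol; key establishment and the biometric verification step are abstracted away.

```python
# Generic nonce-based challenge-response authentication, illustrative only.
# A fresh nonce per session prevents replay of old responses.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # assumed pre-established between user and server

def server_challenge() -> bytes:
    """Server issues a fresh random nonce for this session."""
    return os.urandom(16)

def user_response(key: bytes, nonce: bytes, identity: bytes) -> bytes:
    """User proves knowledge of the key by MACing the nonce and identity."""
    return hmac.new(key, nonce + identity, hashlib.sha256).digest()

def server_verify(key: bytes, nonce: bytes, identity: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, nonce + identity, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

nonce = server_challenge()
tag = user_response(SHARED_KEY, nonce, b"patient-42")
print("authenticated:", server_verify(SHARED_KEY, nonce, b"patient-42", tag))
```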
