Sample records for simple scheme based

  1. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are assigned no basis. Unlike the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X, and Y bases, our scheme prepares decoy states only in the X and Y bases, which avoids redundant decoy states in the Z basis, reduces random-number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite number of pulses available in a short time. Numerical simulations show that, when the finite-size effect is taken into account with a reasonable number of pulses for practical scenarios, our simple decoy-state RFI-QKD scheme exhibits comparable or even better performance than the original decoy-state RFI-QKD scheme. In particular, in terms of resistance to the relative rotation of reference frames, the proposed scheme behaves much better than the original one, giving it great potential for adoption in current QKD systems.

  2. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, setting up the equations takes somewhat more effort; however, a straightforward recipe making use of so-called reserve factors is provided for incorporating the branches into the cyclic scheme, enabling a simple treatment of such cases as well.

  3. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, setting up the equations takes somewhat more effort; however, a straightforward recipe making use of so-called reserve factors is provided for incorporating the branches into the cyclic scheme, enabling a simple treatment of such cases as well. PMID:26646356

  4. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals, which leads to deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method using the incoherence measure of the training data are investigated. The proposed methods are very simple, and no additional computation for re-training the classifier is needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show improved classification accuracy compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A simple photoionization scheme for characterizing electron and ion spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wituschek, A.; Vangerow, J. von; Grzesiak, J.

    We present a simple diode-laser-based photoionization scheme for generating electrons and ions with well-defined spatial and energetic (≲2 eV) structures. This scheme can easily be implemented in ion or electron imaging spectrometers for the purpose of off-line characterization and calibration. The low laser power (∼1 mW) needed from a passively stabilized diode laser and the low flux of potassium atoms in an effusive beam make our scheme a versatile source of ions and electrons for applications in research and education.

  6. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant-gain control scheme better suited to parallel mechanisms. The force control scheme can easily be modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be adjusted individually in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

  7. New coherent laser communication detection scheme based on channel-switching method.

    PubMed

    Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren

    2015-04-01

    A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) channel and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I or Q channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I and Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, the sensitivity of this scheme is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.

  8. A Novel DFT-Based DOA Estimation by a Virtual Array Extension Using Simple Multiplications for FMCW Radar

    PubMed Central

    Kim, Bongseok; Kim, Sangdong; Lee, Jonghun

    2018-01-01

    We propose a novel discrete Fourier transform (DFT)-based direction of arrival (DOA) estimation by a virtual array extension using simple multiplications for frequency modulated continuous wave (FMCW) radar. DFT-based DOA estimation is usually employed in radar systems because it provides the advantage of low complexity for real-time signal processing. In order to enhance the resolution of DOA estimation or to decrease the missed-detection probability, it is essential to have a considerable number of channel signals. However, due to constraints of space and cost, it is not easy to increase the number of channel signals. To address this issue, we increase the number of effective channel signals by generating virtual channel signals using simple multiplications of the given channel signals. The increase in channel signals allows the proposed scheme to detect DOA more accurately than the conventional scheme while using the same number of physical channel signals. Simulation results show that the proposed scheme achieves improved DOA estimation compared to the conventional DFT-based method. Furthermore, the effectiveness of the proposed scheme in a practical environment is verified through experiments. PMID:29758016
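
    A minimal sketch of the conventional DFT-based DOA estimate that the paper extends: for a uniform linear array with half-wavelength spacing, a spatial FFT across the channels gives a spectrum whose peak maps to the arrival angle. The virtual-array extension via channel multiplications is not reproduced here; the array size and FFT length are illustrative.

```python
# Minimal sketch: conventional DFT-based DOA estimation on a uniform linear
# array (half-wavelength spacing).  Illustrative only; the paper's virtual
# array extension via channel multiplications is not reproduced here.
import numpy as np

def dft_doa(channel_signals, n_fft=256):
    """channel_signals: (num_channels, num_samples) complex snapshots."""
    snapshot = channel_signals.mean(axis=1)          # one value per channel
    spectrum = np.fft.fftshift(np.fft.fft(snapshot, n=n_fft))
    freqs = np.fft.fftshift(np.fft.fftfreq(n_fft))   # cycles per element
    peak = np.argmax(np.abs(spectrum))
    # For d = lambda/2 the spatial frequency f satisfies sin(theta) = 2*f.
    sin_theta = np.clip(2.0 * freqs[peak], -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Toy example: 8 channels, single source at ~20 degrees, noiseless snapshots.
theta = np.radians(20.0)
m = np.arange(8)
steering = np.exp(1j * np.pi * m * np.sin(theta))    # d = lambda/2
x = steering[:, None] * np.ones((1, 64))
print(dft_doa(x))                                    # close to 20
```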

  9. Development of a new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.

  10. Development of a new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    The successful use of a novel splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
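
    A minimal sketch of an AUSM-type flux for the one-dimensional Euler equations, illustrating the splitting described above: convective quantities are upwinded by a split interface Mach number while the pressure term is split separately. The split-function details follow the commonly cited AUSM formulation and are assumptions here, not taken verbatim from these reports.

```python
# Minimal sketch of an AUSM-type flux for the 1D Euler equations: convective
# terms are upwinded by a split interface Mach number, the pressure term is
# split separately.  Split-function details are assumptions based on the
# commonly cited AUSM formulation.
import numpy as np

GAMMA = 1.4

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    a_L = np.sqrt(GAMMA * p_L / rho_L)
    a_R = np.sqrt(GAMMA * p_R / rho_R)
    M_L, M_R = u_L / a_L, u_R / a_R

    def m_plus(M):    # split Mach number, positive part
        return 0.25 * (M + 1.0) ** 2 if abs(M) <= 1.0 else max(M, 0.0)

    def m_minus(M):   # split Mach number, negative part
        return -0.25 * (M - 1.0) ** 2 if abs(M) <= 1.0 else min(M, 0.0)

    def p_plus(M, p):
        return 0.25 * p * (M + 1.0) ** 2 * (2.0 - M) if abs(M) <= 1.0 else p * float(M > 0.0)

    def p_minus(M, p):
        return 0.25 * p * (M - 1.0) ** 2 * (2.0 + M) if abs(M) <= 1.0 else p * float(M < 0.0)

    m_half = m_plus(M_L) + m_minus(M_R)       # interface Mach number
    p_half = p_plus(M_L, p_L) + p_minus(M_R, p_R)

    def convective_state(rho, u, p, a):
        H = GAMMA / (GAMMA - 1.0) * p / rho + 0.5 * u * u   # total enthalpy
        return np.array([rho * a, rho * a * u, rho * a * H])

    phi = convective_state(rho_L, u_L, p_L, a_L) if m_half >= 0.0 else \
          convective_state(rho_R, u_R, p_R, a_R)            # upwinding
    return m_half * phi + np.array([0.0, p_half, 0.0])

# Sod-like interface states:
print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```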

  11. Exact density functional and wave function embedding schemes based on orbital localization

    NASA Astrophysics Data System (ADS)

    Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály

    2016-08-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  12. LSB-Based Steganography Using Reflected Gray Code

    NASA Astrophysics Data System (ADS)

    Chen, Chang-Chu; Chang, Chin-Chen

    Steganography aims to hide secret data in an innocuous cover medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods, used to embed secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using the reflected Gray code, which is applied to determine the embedded bit from the secret information. Following the transformation rule, the LSBs of the stego-image are not always equal to the secret bits, and experiments show that the differences reach almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
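
    A minimal sketch of one reading of this idea, assuming the secret bit is carried by the least significant bit of the reflected Gray code of the pixel value rather than by the raw LSB; the authors' exact transforming rule may differ, but the distortion per embedded bit stays at most ±1, as in simple LSB substitution.

```python
# Minimal sketch of Gray-coded LSB embedding: the secret bit is read from the
# LSB of the *reflected Gray code* of the pixel value, so the stored LSB
# itself need not equal the secret bit.  Illustrative reading of the scheme,
# not the authors' exact algorithm.

def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def embed_bit(pixel: int, secret_bit: int) -> int:
    # Flipping the pixel's LSB flips the LSB of its Gray code, so at most a
    # +/-1 change is needed -- the same distortion as plain LSB substitution.
    if (to_gray(pixel) & 1) != secret_bit:
        pixel ^= 1
    return pixel

def extract_bit(pixel: int) -> int:
    return to_gray(pixel) & 1

cover = [154, 37, 200, 91]
secret = [1, 0, 0, 1]
stego = [embed_bit(p, b) for p, b in zip(cover, secret)]
assert [extract_bit(p) for p in stego] == secret
print(stego)
```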

  13. A new semiclassical decoupling scheme for electronic transitions in molecular collisions - Application to vibrational-to-electronic energy transfer

    NASA Technical Reports Server (NTRS)

    Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.

    1980-01-01

    A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in P vs lambda results.

  14. Triangle based TVD schemes for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Durlofsky, Louis J.; Osher, Stanley; Engquist, Bjorn

    1990-01-01

    A triangle based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope-limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second order accuracy. Numerical results for linear advection and Burgers' equation are presented.

  15. Simple and robust image-based autofocusing for digital microscopy.

    PubMed

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
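
    A minimal sketch of the kind of image-sharpness scoring an image-based autofocus loop can maximize, here the variance of a Laplacian response over a coarse sweep of stage positions. This is a generic illustration, not the paper's two-image algorithm; the synthetic acquire function and positions are purely illustrative.

```python
# Minimal sketch of image-based focus scoring: a sharpness metric (variance of
# a Laplacian response) that an autofocus loop can maximize over stage
# positions.  Generic illustration only.
import numpy as np

def focus_score(image: np.ndarray) -> float:
    img = image.astype(float)
    # 4-neighbour Laplacian via shifted copies (no external dependencies).
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def best_focus(acquire, positions):
    """acquire(z) -> image; returns the z giving the sharpest image."""
    return max(positions, key=lambda z: focus_score(acquire(z)))

# Toy usage with a synthetic 'stage': sharpest image at z = 0.
rng = np.random.default_rng(0)
def acquire(z):
    sharp = rng.random((64, 64))
    contrast = 1.0 / (1.0 + abs(z))          # crude defocus model
    return contrast * sharp + (1 - contrast) * sharp.mean()

print(best_focus(acquire, [-4, -2, 0, 2, 4]))
```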

  16. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  17. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  18. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.

    1991-01-01

    This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  19. Self-match based on polling scheme for passive optical network monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a polling-based self-match scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity requirement of the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.

  20. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  1. A new Euler scheme based on harmonic-polygon approach for solving first order ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Yusop, Nurhafizah Moziyana Mohd; Hasan, Mohammad Khatim; Wook, Muslihah; Amran, Mohd Fahmi Mohamad; Ahmad, Siti Rohaidah

    2017-10-01

    There are many benefits to improving the Euler scheme for solving ordinary differential equation problems, among them simple implementation and low computational cost. However, the limited accuracy of the Euler scheme pushes researchers toward more complex methods. Therefore, the main purpose of this research is to show the construction of a new modified Euler scheme that improves the accuracy of the Polygon scheme at various step sizes. The new scheme combines the Polygon scheme with the harmonic mean concept and is called the Harmonic-Polygon scheme. The Harmonic-Polygon scheme can provide advantages beyond what the Euler scheme offers for solving ordinary differential equation problems. Four sets of problems are solved via the Harmonic-Polygon scheme. Findings show that the new Harmonic-Polygon scheme produces much more accurate results.
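
    A minimal sketch under stated assumptions: the 'Polygon' slope is taken to be the midpoint-method slope and is combined with the Euler slope through a harmonic mean. The paper's exact Harmonic-Polygon formula may differ; this only illustrates the general construction being described.

```python
# Minimal sketch, under assumptions: the Euler slope and a midpoint ('Polygon')
# slope are combined through a harmonic mean.  The paper's exact formula may
# differ; this only illustrates the construction.

def harmonic_polygon_step(f, x, y, h):
    k1 = f(x, y)                               # Euler slope
    k2 = f(x + h / 2, y + h / 2 * k1)          # Polygon (midpoint) slope
    if k1 + k2 == 0:                           # avoid division by zero
        slope = 0.5 * (k1 + k2)
    else:
        slope = 2.0 * k1 * k2 / (k1 + k2)      # harmonic mean of the slopes
    return y + h * slope

def solve(f, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(harmonic_polygon_step(f, xs[-1], ys[-1], h))
        xs.append(xs[-1] + h)
    return xs, ys

# Test problem: y' = y, y(0) = 1, exact solution e^x.
import math
xs, ys = solve(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1], math.exp(1.0))
```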

  2. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  3. Don’t make cache too complex: A simple probability-based cache management scheme for SSDs

    PubMed Central

    Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme. PMID:28358897

  4. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    PubMed

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme.
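
    A minimal sketch of probability-based cache admission in the spirit described above: a block enters the cache only if a random test passes, so frequently written hot blocks eventually get cached while cold blocks mostly stay out. The admission probability and the LRU eviction used here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of probability-based cache admission: a block is admitted
# only if a random test passes; hot blocks eventually enter, cold blocks
# mostly bypass the cache.  Admission probability and LRU eviction are
# assumptions for illustration.
import random
from collections import OrderedDict

class ProbabilisticCache:
    def __init__(self, capacity, admit_prob=0.25, seed=0):
        self.capacity = capacity
        self.admit_prob = admit_prob
        self.entries = OrderedDict()           # block_id -> data, LRU order
        self.rng = random.Random(seed)

    def write(self, block_id, data):
        if block_id in self.entries:
            self.entries.move_to_end(block_id)
            self.entries[block_id] = data
            return True                         # absorbed in the cache
        if self.rng.random() >= self.admit_prob:
            return False                        # bypass cache (write to flash)
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        self.entries[block_id] = data
        return True

cache = ProbabilisticCache(capacity=4)
hits = sum(cache.write(blk, None) for blk in [1, 2, 1, 1, 3, 1, 9, 1, 1, 2])
print(hits)
```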

  5. Exact density functional and wave function embedding schemes based on orbital localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  6. Geographic Information Systems: A Primer

    DTIC Science & Technology

    1990-10-01

    …utilizing sophisticated integrated databases (usually vector-based), avoid the indirect value coding scheme by recognizing names or direct magnitudes… intricate involvement required by the operator in order to establish a functional coding scheme. A simple raster system, in which cell values indicate…

  7. HiPS - Hierarchical Progressive Survey Version 1.0

    NASA Astrophysics Data System (ADS)

    Fernique, Pierre; Allen, Mark; Boch, Thomas; Donaldson, Tom; Durand, Daniel; Ebisawa, Ken; Michel, Laurent; Salgado, Jesus; Stoehr, Felix; Fernique, Pierre

    2017-05-01

    This document presents HiPS, a hierarchical scheme for the description, storage and access of sky survey data. The system is based on hierarchical tiling of sky regions at finer and finer spatial resolution which facilitates a progressive view of a survey, and supports multi-resolution zooming and panning. HiPS uses the HEALPix tessellation of the sky as the basis for the scheme and is implemented as a simple file structure with a direct indexing scheme that leads to practical implementations.
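
    A minimal sketch of the direct indexing idea: a HEALPix tile at a given order and pixel number maps to a predictable file path, so clients can fetch tiles without any database lookup. The NorderK/DirD/NpixN layout, with directories grouping tiles in blocks of 10000, follows the HiPS 1.0 convention; treat the exact naming and extension here as assumptions.

```python
# Minimal sketch of direct tile indexing in a HiPS-like file structure:
# order + HEALPix pixel number map straight to a path, with directories
# grouping tiles in blocks of 10000 (HiPS 1.0 convention; layout details
# here are assumptions).

def hips_tile_path(order: int, npix: int, ext: str = "fits") -> str:
    directory = (npix // 10000) * 10000
    return f"Norder{order}/Dir{directory}/Npix{npix}.{ext}"

print(hips_tile_path(3, 271))        # Norder3/Dir0/Npix271.fits
print(hips_tile_path(8, 456789))     # Norder8/Dir450000/Npix456789.fits
```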

  8. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate as to whether such complex models are required exists. Here an assessment in the predictive performance between two established large-scale biophysical cardiac models and a simple linear model B net was conducted. Three ion-channel data-sets were extracted from literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the B net model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  9. On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1996-01-01

    A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.
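
    A minimal sketch of weighted redundancy resolution for a composite mobile manipulator: base and arm velocities are chosen to minimize a weighted norm subject to the end-effector velocity task, so larger weights penalize motion of the corresponding degrees of freedom. This is a generic weighted least-norm solution, not the full configuration-control scheme with additional user-defined tasks described in the disclosure; the Jacobian and weights below are illustrative.

```python
# Minimal sketch of weighted redundancy resolution: solve J q_dot = x_dot
# while minimizing q_dot^T W q_dot, so heavily weighted degrees of freedom
# (e.g. the mobile base) move less.  Generic weighted least-norm solution,
# not the disclosed configuration-control scheme with extra task terms.
import numpy as np

def weighted_least_norm(J, x_dot, weights):
    """q_dot = W^-1 J^T (J W^-1 J^T)^-1 x_dot with W = diag(weights)."""
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JW = J @ W_inv
    return W_inv @ J.T @ np.linalg.solve(JW @ J.T, x_dot)

# Toy composite robot: 2 base DOF + 3 arm DOF, 2-D end-effector velocity task.
rng = np.random.default_rng(1)
J = rng.standard_normal((2, 5))
x_dot = np.array([0.1, -0.05])

q_dot = weighted_least_norm(J, x_dot, weights=[10.0, 10.0, 1.0, 1.0, 1.0])
print(q_dot)          # heavily weighted base DOFs (first two) tend to move less
print(J @ q_dot)      # reproduces the commanded end-effector rate
```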

  10. Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.

    DTIC Science & Technology

    1987-07-01

    …detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, two-rail encoding…

  11. Adaptive independent joint control of manipulators - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1988-01-01

    The author presents a simple decentralized adaptive control scheme for multijoint robot manipulators based on the independent joint control concept. The proposed control scheme for each joint consists of a PID (proportional integral and differential) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. The static and dynamic couplings that exist between the joint motions are compensated by the adaptive independent joint controllers while ensuring trajectory tracking. The proposed scheme is implemented on a MicroVAX II computer for motion control of the first three joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite strongly coupled, highly nonlinear joint dynamics. The results confirm that the proposed decentralized adaptive control of manipulators is feasible, in spite of strong interactions between joint motions. The control scheme presented is computationally very fast and is amenable to parallel processing implementation within a distributed computing architecture, where each joint is controlled independently by a simple algorithm on a dedicated microprocessor.

  12. High-order UWB pulses scheme to generate multilevel modulation formats based on incoherent optical sources.

    PubMed

    Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2013-11-18

    We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by slicing an incoherent optical source and using phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit better compliance with the FCC requirements. Moreover, the proposed scheme is easily adaptable to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in adapting to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.

  13. SYSTEMATIZATION OF MASS LEVELS OF PARTICLES AND RESONANCES ON HEURISTIC BASIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takabayasi, T.

    1963-12-16

    Once more a scheme of simple mass rules and formulas for particles and resonant levels is investigated and organized, based on some general hypotheses. The essential ingredients of the scheme are, on one hand, the equal-interval rule governing the isosinglet meson series, associated with a particularly simple mass ratio between the 2++ level f and the 0++ level ABC, and on the other a new basic mass formula that unifies some of the meson and baryon levels. The whole set of baryon levels is arranged in a table analogous to the periodic table, and then correspondences between different series and the equivalence between spin and hypercharge, when properly applied, fix the whole baryon mass spectrum in good agreement with observations. Connections with the scheme of mass formulas formerly given are also shown. (auth)

  14. A numerical study of the axisymmetric Couette-Taylor problem using a fast high-resolution second-order central scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupferman, R.

    The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.

  15. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.

  16. Simple Analytic Formula for the Period of the Nonlinear Pendulum via the Struve Function: Connection to Acoustical Impedance Matching

    ERIC Educational Resources Information Center

    Douvropoulos, Theodosios G.

    2012-01-01

    An approximate formula for the period of pendulum motion beyond the small amplitude regime is obtained based on physical arguments. Two schemes of different accuracy are developed: in the first, less accurate scheme, emphasis is placed on the non-quadratic form of the potential in connection with isochronism, and a specific form of a generic…

  17. A second-order shock-adaptive Godunov scheme based on the generalized Lagrangian formulation

    NASA Astrophysics Data System (ADS)

    Lepage, Claude

    Application of the Godunov scheme to the Euler equations of gas dynamics, based on the Eulerian formulation of flow, smears discontinuities (especially sliplines) over several computational cells, while the accuracy in the smooth flow regions is of the order of a function of the cell width. Based on the generalized Lagrangian formulation (GLF), the Godunov scheme yields far superior results. By the use of coordinate streamlines in the GLF, the slipline (itself a streamline) is resolved crisply. Infinite shock resolution is achieved through the splitting of shock cells, while the accuracy in the smooth flow regions is improved using a nonconservative formulation of the governing equations coupled to a second order extension of the Godunov scheme. Furthermore, GLF requires no grid generation for boundary value problems and the simple structure of the solution to the Riemann problem in the GLF is exploited in the numerical implementation of the shock adaptive scheme. Numerical experiments reveal high efficiency and unprecedented resolution of shock and slipline discontinuities.

  18. An advanced temporal credential-based security scheme with mutual authentication and key agreement for wireless sensor networks.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi

    2013-07-24

    Wireless sensor networks (WSNs) can be quickly and randomly deployed in any harsh and unattended environment, and only authorized users are allowed to access reliable sensor nodes in WSNs with the aid of gateways (GWNs). Secure authentication models among the users, the sensor nodes and the GWN are important research issues for ensuring communication security and data privacy in WSNs. In 2013, Xue et al. proposed a temporal-credential-based mutual authentication and key agreement scheme for WSNs. However, in this paper, we point out that Xue et al.'s scheme cannot resist stolen-verifier, insider, off-line password guessing, smart-card-loss and many-logged-in-users attacks, and these security weaknesses make the scheme inapplicable to practical WSN applications. To tackle these problems, we suggest a simple countermeasure to prevent the proposed attacks while the other merits of Xue et al.'s authentication scheme are left unchanged.

  19. An Advanced Temporal Credential-Based Security Scheme with Mutual Authentication and Key Agreement for Wireless Sensor Networks

    PubMed Central

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi

    2013-01-01

    Wireless sensor networks (WSNs) can be quickly and randomly deployed in any harsh and unattended environment, and only authorized users are allowed to access reliable sensor nodes in WSNs with the aid of gateways (GWNs). Secure authentication models among the users, the sensor nodes and the GWN are important research issues for ensuring communication security and data privacy in WSNs. In 2013, Xue et al. proposed a temporal-credential-based mutual authentication and key agreement scheme for WSNs. However, in this paper, we point out that Xue et al.'s scheme cannot resist stolen-verifier, insider, off-line password guessing, smart-card-loss and many-logged-in-users attacks, and these security weaknesses make the scheme inapplicable to practical WSN applications. To tackle these problems, we suggest a simple countermeasure to prevent the proposed attacks while the other merits of Xue et al.'s authentication scheme are left unchanged. PMID:23887085

  20. The Battlefield Environment Division Modeling Framework (BMF). Part 1: Optimizing the Atmospheric Boundary Layer Environment Model for Cluster Computing

    DTIC Science & Technology

    2014-02-01

    …idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative …understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel …scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2…

  1. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    NASA Astrophysics Data System (ADS)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.

  2. Some implementational issues of convection schemes for finite volume formulations

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei

    1993-01-01

    Two higher-order upwind schemes - second-order upwind and QUICK - are examined in terms of their interpretation, implementation as well as performance for a recirculating flow in a lid-driven cavity, in the context of a control volume formulation using the SIMPLE algorithm. The present formulation of these schemes is based on a unified framework wherein the first-order upwind scheme is chosen as the basis, with the remaining terms being assigned to the source term. The performance of these schemes is contrasted with the first-order upwind and second-order central difference schemes. Also addressed in this study is the issue of boundary treatment associated with these higher-order upwind schemes. Two different boundary treatments - one that uses a two-point scheme consistently within a given control volume at the boundary, and the other that maintains consistency of flux across the interior face between the adjacent control volumes - are formulated and evaluated.

  3. Some implementational issues of convection schemes for finite-volume formulations

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei

    1993-01-01

    Two higher-order upwind schemes - second-order upwind and QUICK - are examined in terms of their interpretation, implementations, as well as performance for a recirculating flow in a lid-driven cavity, in the context of a control-volume formulation using the SIMPLE algorithm. The present formulation of these schemes is based on a unified framework wherein the first-order upwind scheme is chosen as the basis, with the remaining terms being assigned to the source term. The performance of these schemes is contrasted with the first-order upwind and second-order central difference schemes. Also addressed in this study is the issue of boundary treatment associated with these higher-order upwind schemes. Two different boundary treatments - one that uses a two-point scheme consistently within a given control volume at the boundary, and the other that maintains consistency of flux across the interior face between the adjacent control volumes - are formulated and evaluated.
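
    A minimal sketch of the deferred-correction idea described in these two entries: the first-order upwind face value is what an implicit solver keeps in its coefficient matrix, while the difference between a higher-order (here QUICK) face value and the upwind value, evaluated with the previous iterate, is shifted to the source term. A uniform 1-D grid and positive flow direction are assumed; the exact bookkeeping in the papers may differ.

```python
# Minimal sketch of deferred correction for convection: keep first-order
# upwind implicitly, move the (QUICK - upwind) difference, evaluated with the
# previous iterate, into the explicit source term.  Uniform 1-D grid,
# positive velocity; illustrative only.
import numpy as np

def deferred_correction_faces(phi_old, flux_rate):
    """Return (upwind face values, explicit deferred-correction source per face)
    for interior faces i+1/2 with i = 1 .. n-2 (positive flow direction)."""
    up   = phi_old[1:-1]                       # upstream node C
    down = phi_old[2:]                         # downstream node D
    far  = phi_old[:-2]                        # far-upstream node U
    face_upwind = up
    face_quick = (6.0 * up + 3.0 * down - far) / 8.0   # QUICK face value
    correction = flux_rate * (face_quick - face_upwind)
    return face_upwind, correction

phi = np.array([1.0, 0.9, 0.7, 0.4, 0.2, 0.1])
upwind_faces, dc_source = deferred_correction_faces(phi, flux_rate=1.0)
print(upwind_faces)
print(dc_source)   # added explicitly; at convergence the solution satisfies QUICK
```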

  4. Simple pre-distortion schemes for improving the power efficiency of SOA-based IR-UWB over fiber systems

    NASA Astrophysics Data System (ADS)

    Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.

    2017-01-01

    In this paper, we investigate the use of an SOA for reach extension of an impulse radio over fiber system. Operating in the saturated regime translates into strong nonlinearities and spectral distortions, which reduce the power efficiency of the propagated pulses. After studying the SOA response versus operating conditions, we enhance the system performance by applying simple analog pre-distortion schemes to various derivatives of the Gaussian pulse and their combinations. A novel pulse shape has also been designed by linearly combining three basic Gaussian pulses, offering a very good spectral efficiency (> 55%) at a high power (0 dBm) at the amplifier input. Furthermore, the potential of our technique has been examined considering 1.5 Gbps OOK and 0.75 Gbps PPM modulation schemes. Pre-distortion proved advantageous for a large extension of the optical link (150 km), with inline amplification via an SOA at 40 km.

  5. A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.

    2017-06-01

    Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computation, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash values. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is then processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as the distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds capabilities comparable to some recent chaos-based hash algorithms.
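
    A toy illustration of the general chaos-based hashing construction: message bytes perturb the state of a single logistic map and the trajectory is condensed into a fixed-size digest. This is not the authors' multi-map, four-branch scheme and is not a vetted cryptographic hash; it only shows how chaotic iteration can provide mixing and sensitivity.

```python
# Toy illustration of chaos-based hashing: message bytes perturb the state of
# a logistic map and the trajectory is condensed into a fixed-size digest.
# NOT the paper's multi-map scheme and not a vetted cryptographic hash.

def chaotic_hash(message: bytes, digest_bytes: int = 16, r: float = 3.99) -> bytes:
    x = 0.5                                    # initial chaotic state
    # Absorb: each byte nudges the state, then the map is iterated.
    for b in message:
        x = (x + (b + 1) / 257.0) % 1.0 or 0.37
        for _ in range(8):
            x = r * x * (1.0 - x)              # logistic map iteration
    # Squeeze: keep iterating and sample 8 bits per output byte.
    out = bytearray()
    for _ in range(digest_bytes):
        for _ in range(8):
            x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

print(chaotic_hash(b"hello world").hex())
print(chaotic_hash(b"hello worle").hex())      # small change, very different digest
```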

  6. Orbital-angular-momentum mode-group multiplexed transmission over a graded-index ring-core fiber based on receive diversity and maximal ratio combining

    NASA Astrophysics Data System (ADS)

    Zhang, Junwei; Zhu, Guoxuan; Liu, Jie; Wu, Xiong; Zhu, Jiangbo; Du, Cheng; Luo, Wenyong; Chen, Yujie; Yu, Siyuan

    2018-02-01

    An orbital-angular-momentum (OAM) mode-group multiplexing (MGM) scheme based on a graded-index ring-core fiber (GIRCF) is proposed, in which a single-input two-output (receive diversity) architecture is designed for each MG channel and simple digital signal processing (DSP) is utilized to adaptively resist the mode partition noise resulting from random intra-group mode crosstalk. No complex multiple-input multiple-output (MIMO) equalization is needed in this scheme. Furthermore, the signal-to-noise ratio (SNR) of the received signals can be improved if a simple maximal ratio combining (MRC) technique is employed on the receiver side to efficiently exploit the receiver diversity gain. Intensity-modulated direct-detection (IM-DD) systems transmitting three OAM mode groups carrying a total of 100-Gb/s discrete multi-tone (DMT) signals over a 1-km GIRCF and two OAM mode groups carrying a total of 40-Gb/s DMT signals over an 18-km GIRCF are experimentally demonstrated to confirm the feasibility of the proposed OAM-MGM scheme.
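
    A minimal sketch of maximal ratio combining for a two-branch receive-diversity setup: each branch is weighted by the conjugate of its channel estimate divided by its noise power and the branches are summed, which maximizes the combined SNR. The paper's mode-group DSP chain (DMT demodulation, adaptive crosstalk handling) is not reproduced; the gains and noise levels below are illustrative.

```python
# Minimal sketch of maximal ratio combining (MRC) for two receive branches:
# weight each branch by conj(channel)/noise power and sum, maximizing SNR.
import numpy as np

def mrc_combine(received, channel_estimates, noise_powers):
    """received: (branches, samples); channel_estimates broadcastable to it;
    noise_powers: (branches,)."""
    h = np.asarray(channel_estimates)
    w = np.conj(h) / np.asarray(noise_powers)[:, None]      # MRC weights
    return (w * np.asarray(received)).sum(axis=0)

# Toy example: BPSK symbols through two branches with different gains/noise.
rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=1000)
h = np.array([[0.9 * np.exp(1j * 0.3)], [0.4 * np.exp(-1j * 1.1)]])
noise = np.array([0.05, 0.2])
rx = h * symbols + np.sqrt(noise)[:, None] * (
    rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000))) / np.sqrt(2)

combined = mrc_combine(rx, h, noise)
decisions = np.sign(combined.real)
print((decisions == symbols).mean())          # close to 1.0
```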

  7. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations expressed as functions of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.

  8. Implementing an ancilla-free 1→M economical phase-covariant quantum cloning machine with superconducting quantum-interference devices in cavity QED

    NASA Astrophysics Data System (ADS)

    Yu, Long-Bao; Zhang, Wen-Hai; Ye, Liu

    2007-09-01

    We propose a simple scheme to realize a 1→M economical phase-covariant quantum cloning machine (EPQCM) with superconducting quantum-interference device (SQUID) qubits. In our scheme, multiple SQUIDs are fixed in a microwave cavity and manipulated by adiabatic passage. Based on this model, we can realize the EPQCM with high fidelity via adiabatic quantum computation.

  9. Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT

    NASA Astrophysics Data System (ADS)

    Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.

    2012-02-01

    For multi-atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55 and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
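
    A minimal sketch of the majority-voting baseline these fusion methods are compared against: each atlas contributes one label per voxel and the most frequent label wins. The (local) SIMPLE strategy of iteratively weighting or discarding poorly performing atlases is not reproduced here.

```python
# Minimal sketch of the majority-voting baseline for multi-atlas label fusion:
# each atlas votes one label per voxel, the most frequent label wins.
import numpy as np

def majority_vote(atlas_labels):
    """atlas_labels: (num_atlases, ...) integer label maps -> fused label map."""
    labels = np.asarray(atlas_labels)
    classes = np.unique(labels)
    # votes[k, ...] = number of atlases voting for classes[k] at each voxel
    votes = np.stack([(labels == c).sum(axis=0) for c in classes])
    return classes[np.argmax(votes, axis=0)]

# Three toy 'atlas' segmentations of a 4-voxel image (labels 0/1/2):
atlases = [[0, 1, 2, 1],
           [0, 1, 1, 1],
           [2, 1, 2, 0]]
print(majority_vote(atlases))     # [0 1 2 1]
```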

  10. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.

    PubMed

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-05-12

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach is of simple construction and provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while achieving better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity.

  11. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks

    PubMed Central

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-01-01

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach is of simple construction and provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while achieving better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity. PMID:29757244

  12. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Strum, R.; Stiles, D.; Long, C.; Rakhman, A.; Blokland, W.; Winder, D.; Riemer, B.; Wendel, M.

    2018-03-01

    We describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry-Perot sensors for measurement of strains and vibrations.
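
    A minimal sketch of generic quadrature phase recovery: with two interferometer outputs in quadrature (I ∝ cos φ, Q ∝ sin φ), the measurand-induced phase is the unwrapped arctangent of Q over I. How the paper generates the phase-shifted signals at the interrogation interferometer, and its calibration step, are not reproduced; the signals below are synthetic.

```python
# Minimal sketch of quadrature phase demodulation: recover the phase as the
# unwrapped arctangent of Q/I.  Offsets/scales and the signal generation used
# in the paper are not reproduced; this example is synthetic.
import numpy as np

def demodulate_phase(i_signal, q_signal):
    return np.unwrap(np.arctan2(q_signal, i_signal))

# Toy example: a 50 rad peak-to-peak vibration-like phase excursion.
t = np.linspace(0.0, 1.0, 5000)
true_phase = 25.0 * np.sin(2 * np.pi * 5 * t)
I = np.cos(true_phase)
Q = np.sin(true_phase)
recovered = demodulate_phase(I, Q)
print(np.max(np.abs(recovered - true_phase)))     # ~0 (numerical precision)
```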

  13. Optimizing congestion and emissions via tradable credit charge and reward scheme without initial credit allocations

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang

    2017-01-01

    This paper investigates revenue-neutral tradable credit charge and reward schemes without initial credit allocations that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum-emission flow pattern into a user equilibrium. Moreover, we design a solution method for the proposed credit scheme for the minimum-emission problem. Second, we investigate revenue-neutral tradable credit charge and reward schemes without initial credit allocations for the bi-objective problem of obtaining Pareto system-optimum flow patterns for congestion and emissions, and show that the corresponding solutions are located in the polyhedron defined by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is used to derive the proposed credit schemes and verify that they are revenue-neutral.

  14. Mixed biodiversity benefits of agri-environment schemes in five European countries.

    PubMed

    Kleijn, D; Baquero, R A; Clough, Y; Díaz, M; De Esteban, J; Fernández, F; Gabriel, D; Herzog, F; Holzschuh, A; Jöhl, R; Knop, E; Kruess, A; Marshall, E J P; Steffan-Dewenter, I; Tscharntke, T; Verhulst, J; West, T M; Yela, J L

    2006-03-01

    Agri-environment schemes are an increasingly important tool for the maintenance and restoration of farmland biodiversity in Europe but their ecological effects are poorly known. Scheme design is partly based on non-ecological considerations and poses important restrictions on evaluation studies. We describe a robust approach to evaluate agri-environment schemes and use it to evaluate the biodiversity effects of agri-environment schemes in five European countries. We compared species density of vascular plants, birds, bees, grasshoppers and crickets, and spiders on 202 paired fields, one with an agri-environment scheme, the other conventionally managed. In all countries, agri-environment schemes had marginal to moderately positive effects on biodiversity. However, uncommon species benefited in only two of five countries and species listed in Red Data Books rarely benefited from agri-environment schemes. Scheme objectives may need to differentiate between biodiversity of common species that can be enhanced with relatively simple modifications in farming practices and diversity or abundance of endangered species which require more elaborate conservation measures.

  15. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c_0 2^(-c_1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
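
    The abstract only names the prune-and-join strategy; as a minimal, hypothetical illustration of the pruning half of such an R-D optimization, the sketch below prunes a binary tree bottom-up by comparing the Lagrangian cost J = D + lambda*R of coding a segment as one leaf against the summed cost of its children. The node structure and cost fields are assumptions, not the authors' coder.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    distortion: float                 # D if this segment is coded as a single leaf
    rate: float                       # R (bits) if this segment is coded as a single leaf
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def prune(node: Node, lam: float) -> float:
    """Prune the tree in place for the Lagrangian J = D + lam * R; return the subtree cost."""
    leaf_cost = node.distortion + lam * node.rate
    if node.left is None or node.right is None:
        return leaf_cost
    children_cost = prune(node.left, lam) + prune(node.right, lam)
    if leaf_cost <= children_cost:
        node.left = node.right = None      # cheaper to code this segment as one leaf
        return leaf_cost
    return children_cost
```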

  16. Simple 2.5 GHz time-bin quantum key distribution

    NASA Astrophysics Data System (ADS)

    Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; Boso, Gianluca; Rusca, Davide; Gray, Stuart; Li, Ming-Jun; Nolan, Daniel; Martin, Anthony; Zbinden, Hugo

    2018-04-01

    We present a 2.5 GHz quantum key distribution setup with the emphasis on a simple experimental realization. It features a three-state time-bin protocol based on a pulsed diode laser and a single intensity modulator. Implementing an efficient one-decoy scheme and finite-key analysis, we achieve record breaking secret key rates of 1.5 kbps over 200 km of standard optical fibers.

  17. Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits

    NASA Astrophysics Data System (ADS)

    Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng

    2017-11-01

    We propose a novel theoretical scheme of quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and coherently control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all of these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfaces from controlling and observing a complex quantum system down to controlling a simple quantum actuator. It also provides a novel way to handle complex quantum systems.

  18. Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs

    NASA Astrophysics Data System (ADS)

    Zhu, Yanfeng; Niu, Zhisheng

    Much research has shown that a carefully designed auto rate medium access control can utilize the underlying physical multi-rate capability to exploit the time-variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes called FARM and FARM+ from the viewpoint of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed from the signal-to-noise ratio (SNR) of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the SNR distribution varies across stations. Extensive simulation results show that the proposed schemes outperform the existing throughput/time-share fair auto rate schemes in time-varying channel conditions.
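
    As a toy illustration of the receiver-side rate probing described above, the snippet below picks the highest physical-layer rate whose SNR threshold is met by the measured SNR of the received RTS frame. The threshold table is illustrative only; it is not taken from the paper or from the 802.11 standard.

```python
# Hypothetical (rate in Mbit/s, minimum required SNR in dB) pairs, highest rate first.
RATE_TABLE = [(54, 24.0), (48, 22.0), (36, 18.0), (24, 14.0),
              (18, 11.0), (12, 8.0), (9, 6.0), (6, 4.0)]

def select_rate(rts_snr_db: float) -> int:
    """Return the maximum feasible rate probed from the SNR of the received RTS frame."""
    for rate, min_snr in RATE_TABLE:
        if rts_snr_db >= min_snr:
            return rate
    return RATE_TABLE[-1][0]   # fall back to the most robust rate

print(select_rate(19.5))  # -> 36
```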

  19. Dual frequency comb metrology with one fiber laser

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Takeshi, Yasui; Zheng, Zheng

    2016-11-01

    Optical metrology techniques based on dual optical frequency combs have emerged as a hotly studied area targeting a wide range of applications from optical spectroscopy to microwave and terahertz frequency measurement. Generating two sets of high-quality comb lines with slightly different comb-tooth spacings with high mutual coherence and stability is the key to most of the dual-comb schemes. The complexity and costs of such laser sources and the associated control systems to lock the two frequency combs hinder the wider adoption of such techniques. Here we demonstrate a very simple and rather different approach to tackle such a challenge. By employing novel laser cavity designs in a mode-locked fiber laser, a simple fiber laser setup could emit dual-comb pulse output with high stability and good coherence between the pulse trains. Based on such lasers, comb-tooth-resolved dual-comb optical spectroscopy is demonstrated. Picometer spectral resolving capability could be realized with a fiber-optic setup and a low-cost data acquisition system and standard algorithms. Besides, the frequency of microwave signals over a large range can be determined based on a simple setup. Our results show the capability of such single-fiber-laser-based dual-comb scheme to reduce the complexity and cost of dual-comb systems with excellent quality for different dual-comb applications.

  20. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
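
    The UTOPIA interpolation and the multidimensional limiter are not reproduced in this record; the sketch below only illustrates the general idea of flux limiting in one dimension, using a van Leer limited upwind flux in a conservative finite-volume update so that an initially non-negative field stays non-negative for Courant numbers up to one. The function names and the choice of limiter are illustrative.

```python
import numpy as np

def vanleer(r):
    """van Leer flux limiter."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(q, u, dt, dx):
    """One conservative step of 1-D advection (constant u > 0, periodic domain)."""
    c = u * dt / dx                        # Courant number, assumed <= 1
    dq = np.roll(q, -1) - q                # downwind differences q_{i+1} - q_i
    dq_up = q - np.roll(q, 1)              # upwind differences   q_i - q_{i-1}
    denom = np.where(np.abs(dq) < 1e-12, 1e-12, dq)
    r = dq_up / denom                      # smoothness ratio fed to the limiter
    # limited face value: first-order upwind plus a limited anti-diffusive correction
    q_face = q + 0.5 * (1.0 - c) * vanleer(r) * dq
    flux = u * q_face
    return q - (dt / dx) * (flux - np.roll(flux, 1))

# Square-wave test: the minimum stays non-negative up to round-off
x = np.linspace(0, 1, 200, endpoint=False)
q = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
for _ in range(100):
    q = advect_step(q, u=1.0, dt=0.0025, dx=x[1] - x[0])
print(q.min())
```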

  1. Quantum Watermarking Scheme Based on INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

    Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. First, the watermark image is extended to meet the size requirement of the embedding carrier image. Second, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of a simple encryption. Third, both the watermark embedding and extraction operations are described, where the key image, the swap operation, and the LSB algorithm are used. When the embedding is performed, the binary key image is changed, indicating that the watermark has been embedded. Conversely, to extract the watermark, the key's state must be detected: the extraction operation is carried out when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
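
    A quantum INEQR implementation is beyond a short snippet; purely as a classical analogue of the XOR-with-key idea mentioned above, the sketch below embeds and extracts a binary watermark by XOR-ing it with a binary key image and writing the result into the least-significant bits of a grayscale carrier. All array names and sizes are hypothetical.

```python
import numpy as np

def embed(carrier, watermark, key):
    """Hide XOR(watermark, key) in the LSB plane of an 8-bit grayscale carrier."""
    payload = (watermark ^ key) & 1
    return (carrier & 0xFE) | payload          # clear the LSBs, then write the payload

def extract(stego, key):
    """Recover the binary watermark from the LSB plane using the same key."""
    return (stego & 1) ^ key

rng = np.random.default_rng(0)
carrier = rng.integers(0, 256, (64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, (64, 64), dtype=np.uint8)
key = rng.integers(0, 2, (64, 64), dtype=np.uint8)

stego = embed(carrier, watermark, key)
assert np.array_equal(extract(stego, key), watermark)
```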

  2. The Effect of Performance-Based Financial Incentives on Improving Health Care Provision in Burundi: A Controlled Cohort Study

    PubMed Central

    Rudasingwa, Martin; Soeters, Robert; Bossuyt, Michel

    2015-01-01

    To strengthen health care delivery, the Burundian Government, in collaboration with international NGOs, piloted performance-based financing (PBF) in 2006. The health facilities were assigned, using a simple matching method, to begin the PBF scheme or to continue with traditional input-based funding. Our objective was to analyse the effect of that PBF scheme on the quality of health services between 2006 and 2008. We conducted the analysis in 16 health facilities with the PBF scheme and 13 health facilities without it. We analysed the PBF effect by using 58 composite quality indicators of eight health services: care management, outpatient care, maternity care, prenatal care, family planning, laboratory services, medicines management and materials management. The differences in quality improvement between the two groups of health facilities were assessed using descriptive statistics, a paired non-parametric Wilcoxon signed-rank test and a simple difference-in-difference approach at a significance level of 5%. We found an improvement in the quality of care in the PBF group and a significant deterioration in the non-PBF group in the same four health services: care management, outpatient care, maternity care, and prenatal care. The findings suggest a PBF effect of between 38 and 66 percentage points (p<0.001) in the quality scores of care management, outpatient care, prenatal care, and maternal care. We found no PBF effect on clinical support services: laboratory services, medicines management, and material management. The PBF scheme in Burundi contributed to the improvement of the health services that were strongly under the control of medical personnel (physicians and nurses) within a short period of two years. The clinical support services that did not significantly improve were strongly under the control of laboratory technicians, pharmacists and non-medical personnel. PMID:25948432
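
    As a minimal numerical illustration of the simple difference-in-difference approach mentioned in the abstract (with made-up quality scores, not the study's data), the estimator is simply the change in the PBF group minus the change in the comparison group:

```python
# Hypothetical mean quality scores (0-100) before and after the intervention.
pbf_pre, pbf_post = 42.0, 71.0          # facilities with the PBF scheme
ctrl_pre, ctrl_post = 45.0, 38.0        # facilities with input-based funding

did_effect = (pbf_post - pbf_pre) - (ctrl_post - ctrl_pre)
print(did_effect)   # 29 - (-7) = 36 percentage points
```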

  3. Design of a global soil moisture initialization procedure for the simple biosphere model

    NASA Technical Reports Server (NTRS)

    Liston, G. E.; Sud, Y. C.; Walker, G. K.

    1993-01-01

    Global soil moisture and land-surface evapotranspiration fields are computed using an analysis scheme based on the Simple Biosphere (SiB) soil-vegetation-atmosphere interaction model. The scheme is driven with observed precipitation and potential evapotranspiration, where the potential evapotranspiration is computed following the surface air temperature-potential evapotranspiration regression of Thornthwaite (1948). The observed surface air temperature is corrected to reflect potential (zero soil moisture stress) conditions by letting the ratio of actual transpiration to potential transpiration be a function of the normalized difference vegetation index (NDVI). Soil moisture, evapotranspiration, and runoff data are generated on a daily basis for a 10-year period, January 1979 through December 1988, using observed precipitation gridded at a 4 deg by 5 deg resolution.

  4. Economics of internal and external energy storage in solar power plant operation

    NASA Technical Reports Server (NTRS)

    Manvi, R.; Fujita, T.

    1977-01-01

    A simple approach is formulated to investigate the effect of energy storage on the bus-bar electrical energy cost of solar thermal power plants. Economic analysis based on this approach does not require detailed definition of a specific storage system. A wide spectrum of storage system candidates ranging from hot water to superconducting magnets can be studied based on total investment and a rough knowledge of energy in and out efficiencies. Preliminary analysis indicates that internal energy storage (thermal) schemes offer better opportunities for energy cost reduction than external energy storage (nonthermal) schemes for solar applications. Based on data and assumptions used in JPL evaluation studies, differential energy costs due to storage are presented for a 100 MWe solar power plant by varying the energy capacity. The simple approach presented in this paper provides useful insight regarding the operation of energy storage in solar power plant applications, while also indicating a range of design parameters where storage can be cost effective.
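
    The JPL cost data and assumptions are not reproduced in this record; the sketch below only illustrates the kind of back-of-the-envelope comparison described, computing a differential bus-bar energy cost from a storage subsystem's investment, a fixed charge rate, and its in/out (round-trip) efficiency. All numbers and the fixed-charge-rate formulation are assumptions.

```python
def differential_energy_cost(storage_capital_usd, fixed_charge_rate,
                             annual_energy_in_kwh, round_trip_efficiency):
    """Added bus-bar cost (USD/kWh) attributable to the storage subsystem."""
    annual_cost = storage_capital_usd * fixed_charge_rate           # capital charges per year
    annual_energy_out = annual_energy_in_kwh * round_trip_efficiency
    return annual_cost / annual_energy_out

# Internal (thermal) vs. external (non-thermal) storage, illustrative numbers only
print(differential_energy_cost(5e6, 0.15, 3.0e7, 0.95))   # thermal storage
print(differential_energy_cost(9e6, 0.15, 3.0e7, 0.70))   # non-thermal external storage
```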

  5. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and better tolerant of bit-flip errors.

  6. A physically-based retrieval of cloud liquid water from SSM/I measurements

    NASA Technical Reports Server (NTRS)

    Greenwald, Thomas J.; Stephens, Graeme L.; Vonder Haar, Thomas H.

    1992-01-01

    A simple physical scheme is proposed for retrieving cloud liquid water over the ice-free global oceans from Special Sensor Microwave/Imager (SSM/I) observations. Details of the microwave retrieval scheme are discussed, and the microwave-derived liquid water amounts are compared with the ground radiometer and AVHRR-derived liquid water for stratocumulus clouds off the coast of California. Global distributions of the liquid water path derived by the method proposed here are presented.

  7. A Simple Algebraic Grid Adaptation Scheme with Applications to Two- and Three-dimensional Flow Problems

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.; Lytle, John K.

    1989-01-01

    An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
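
    As a one-dimensional sketch of the arc-equidistribution idea (not the authors' multidimensional algorithm or their smoothing step), the snippet below builds a weight from the solution gradient, accumulates it, and redistributes the grid points so that each cell carries an equal share of the accumulated weight. The clustering constant alpha is an assumed tuning parameter.

```python
import numpy as np

def adapt_grid(x, u, alpha=5.0):
    """Redistribute the nodes of x so the weight w = 1 + alpha*|du/dx| is equidistributed."""
    w = 1.0 + alpha * np.abs(np.gradient(u, x))
    # cumulative "arc length" of the weight function (trapezoidal accumulation)
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s_new = np.linspace(0.0, s[-1], len(x))    # equal increments of accumulated weight
    return np.interp(s_new, s, x)              # invert s(x) to obtain the adapted nodes

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(40.0 * (x - 0.5))                  # solution with a sharp internal layer
x_adapted = adapt_grid(x, u)
print(np.diff(x_adapted).min())                # smallest cells cluster near x = 0.5
```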

  8. A Lithology Based Map Unit Schema For Onegeology Regional Geologic Map Integration

    NASA Astrophysics Data System (ADS)

    Moosdorf, N.; Richard, S. M.

    2012-12-01

    A system of lithogenetic categories for a global lithological map (GLiM, http://www.ifbm.zmaw.de/index.php?id=6460&L=3) has been compiled based on analysis of lithology/genesis categories for regional geologic maps for the entire globe. The scheme is presented for discussion and comment. Analysis of units on a variety of regional geologic maps indicates that units are defined based on assemblages of rock types, as well as their genetic type. In this compilation of continental geology, outcropping surface materials are dominantly sediment/sedimentary rock; major subdivisions of the sedimentary category include clastic sediment, carbonate sedimentary rocks, clastic sedimentary rocks, mixed carbonate and clastic sedimentary rock, colluvium and residuum. Significant areas of mixed igneous and metamorphic rock are also present. A system of global categories to characterize the lithology of regional geologic units is important for Earth System models of matter fluxes to soils, ecosystems, rivers and oceans, and for regional analysis of Earth surface processes at global scale. Because different applications of the classification scheme will focus on different lithologic constituents in mixed units, an ontology-type representation of the scheme that assigns properties to the units in an analyzable manner will be pursued. The OneGeology project is promoting deployment of geologic map services at million scale for all nations. Although initial efforts are commonly simple scanned map WMS services, the intention is to move towards data-based map services that categorize map units with standard vocabularies to allow use of a common map legend for better visual integration of the maps (e.g. see OneGeology Europe, http://onegeology-europe.brgm.fr/geoportal/viewer.jsp). Current categorization of regional units with a single lithology from the CGI SimpleLithology (http://resource.geosciml.org/201202/Vocab2012html/SimpleLithology201012.html) vocabulary poorly captures the lithologic character of such units in a meaningful way. A lithogenetic unit category scheme accessible as a GeoSciML-portrayal-based OGC Styled Layer Description resource is key to enabling OneGeology (http://oneGeology.org) geologic map services to achieve a high degree of visual harmonization.

  9. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.

  10. Unconditionally Secure Credit/Debit Card Chip Scheme and Physical Unclonable Function

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Entesari, Kamran; Granqvist, Claes-Göran; Kwan, Chiman

    The statistical-physics-based Kirchhoff-law-Johnson-noise (KLJN) key exchange offers a new and simple unclonable system for credit/debit card chip authentication and payment. The key exchange, the authentication and the communication are unconditionally secure so that neither mathematics- nor statistics-based attacks are able to crack the scheme. The ohmic connection and the short wiring lengths between the chips in the card and the terminal constitute an ideal setting for the KLJN protocol, and even its simplest versions offer unprecedented security and privacy for credit/debit card chips and applications of physical unclonable functions (PUFs).

  11. A symmetric metamaterial element-based RF biosensor for rapid and label-free detection

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Jo; Lee, Jung-Hyun; Jung, Hyo-Il

    2011-10-01

    A symmetric metamaterial element-based RF biosensing scheme is experimentally demonstrated by detecting biomolecular binding between a prostate-specific antigen (PSA) and its antibody. The metamaterial element in a high-impedance microstrip line shows an intrinsic S21 resonance having a Q-factor of 55. The frequency shift with PSA concentration, i.e., 100 ng/ml, 10 ng/ml, and 1 ng/ml, is observed and the changes are Δf ≈ 20 MHz, 10 MHz, and 5 MHz, respectively. The proposed biosensor offers advantages of label-free detection, a simple and direct scheme, and cost-efficient fabrication.

  13. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    DOE PAGES

    Liu, Y.; Strum, R.; Stiles, D.; ...

    2017-11-20

    In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors by employing a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers capability of measuring large variations (up to the coherence of the light source) at the bandwidth that is only limited by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurement of strains and vibrations.

  14. Icing Branch Current Research Activities in Icing Physics

    NASA Technical Reports Server (NTRS)

    Vargas, Mario

    2009-01-01

    Current development: A grid block transformation scheme has been developed which allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities. A simple ice crystal and sand particle bouncing scheme has been included, and an SLD splashing model based on that developed by William Wright for the LEWICE 3.2.2 software has been added. A new area-based collection efficiency algorithm, which calculates trajectories from inflow block boundaries to outflow block boundaries, will be incorporated. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.

  15. Numerical study of read scheme in one-selector one-resistor crossbar array

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of read schemes for a one-selector one-resistance-change-memory (1S1R) crossbar array is carried out. Three schemes (the ground, V/2, and V/3 schemes) are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate all of the current flows and node voltages within a crossbar array. Understanding such phenomena is essential for successfully evaluating the electrical specifications of selectors aimed at suppressing the intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.

  16. Simple Numerical Modelling for Gasdynamic Design of Wave Rotors

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Nagashima, Toshio

    The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism —gradual passage opening, wall friction and leakage— for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.

  17. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
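
    As a schematic, single-joint rendering of the control structure described above (the paper's multivariable form and explicit gain expressions are not reproduced), the torque command below combines a feedforward term built from the desired position, velocity and acceleration with a PID feedback term on the tracking error; all gain values are placeholders.

```python
class PD2PlusPIDController:
    """Single-joint sketch: feedforward from (qd, qd_dot, qd_ddot) plus PID feedback."""

    def __init__(self, kp_ff, kv_ff, ka_ff, kp, ki, kd, dt):
        self.kp_ff, self.kv_ff, self.ka_ff = kp_ff, kv_ff, ka_ff   # feedforward (PD2) gains
        self.kp, self.ki, self.kd = kp, ki, kd                     # feedback (PID) gains
        self.dt = dt
        self.int_e = 0.0
        self.prev_e = 0.0

    def torque(self, q, q_dot, qd, qd_dot, qd_ddot):
        # feedforward from the desired trajectory (position, velocity, acceleration)
        feedforward = self.kp_ff * qd + self.kv_ff * qd_dot + self.ka_ff * qd_ddot
        # PID feedback on the tracking error
        e = qd - q
        self.int_e += e * self.dt
        e_dot = (e - self.prev_e) / self.dt    # or qd_dot - q_dot if velocity is measured
        self.prev_e = e
        feedback = self.kp * e + self.ki * self.int_e + self.kd * e_dot
        return feedforward + feedback
```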

  18. Photonic generation of ultra-wideband doublet pulse using a semiconductor-optical-amplifier based polarization-diversified loop.

    PubMed

    Luo, Bowen; Dong, Jianji; Yu, Yuan; Yang, Ting; Zhang, Xinliang

    2012-06-15

    We propose and demonstrate a novel scheme of ultra-wideband (UWB) doublet pulse generation using a semiconductor optical amplifier (SOA) based polarization-diversified loop (PDL) without any assistant light. In our scheme, the incoming gaussian pulse is split into two parts by the PDL, and each of them is intensity modulated by the other due to cross-gain modulation (XGM) in the SOA. Then, both parts are recombined with incoherent summation to form a UWB doublet pulse. Bi-polar UWB doublet pulse generation is demonstrated using an inverted gaussian pulse injection. Moreover, pulse amplitude modulation of UWB doublet is also experimentally demonstrated. Our scheme shows some advantages, such as simple implementation without assistant light and single optical carrier operation with good fiber dispersion tolerance.

  19. Time dependent density functional calculation of plasmon response in clusters

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhang, Feng-Shou; Eric, Suraud

    2003-02-01

    We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.

  20. Quantum annealing of the traveling-salesman problem.

    PubMed

    Martonák, Roman; Santoro, Giuseppe E; Tosatti, Erio

    2004-11-01

    We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist in restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.
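
    The path-integral quantum annealing machinery is not reproduced here; as a compact reference point, the sketch below implements the classical baseline the authors compare against: thermal simulated annealing of a symmetric TSP tour using the same two-opt (link-exchange) moves. The city coordinates and cooling schedule are made up.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, t_start=10.0, t_end=1e-3, cooling=0.999):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    t = t_start
    while t > t_end:
        i, j = sorted(random.sample(range(n), 2))
        if j - i < 2:                                      # degenerate move, just cool down
            t *= cooling
            continue
        new_tour = tour[:i] + tour[i:j][::-1] + tour[j:]   # two-opt: reverse one segment
        delta = tour_length(new_tour, dist) - tour_length(tour, dist)
        if delta < 0 or random.random() < math.exp(-delta / t):   # Metropolis acceptance
            tour = new_tour
            if tour_length(tour, dist) < best_len:
                best, best_len = tour[:], tour_length(tour, dist)
        t *= cooling
    return best, best_len

# Random 30-city Euclidean instance, illustrative only
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(simulated_annealing(dist)[1])
```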

  1. A security and privacy preserving e-prescription system based on smart cards.

    PubMed

    Hsu, Chien-Lung; Lu, Chung-Fu

    2012-12-01

    In 2002, Ateniese and Medeiros proposed an e-prescription system in which the patient can store the e-prescription and related information using a smart card. Later, in 2004, Yang et al. proposed a novel smart-card based e-prescription system built on Ateniese and Medeiros's system. Yang et al. considered the privacy issues of prescription data and adopted the concept of a group signature to provide patient privacy protection. To make the e-prescription system more realistic, they further applied a proxy signature to allow a patient to delegate his signing capability to other people. This paper proposes a novel security and privacy preserving e-prescription system model based on smart cards. A new role, the chemist, is included in the system model for settling medicine disputes. We further present a concrete identity-based (ID-based) group signature scheme and an ID-based proxy signature scheme to realize the proposed model. The main property of an ID-based system is that the public key is simply the user's identity and can be verified without extra public-key certificates. Our ID-based group signature scheme allows doctors to sign e-prescriptions anonymously; in the case of a medical dispute, the identities of the doctors can be revealed. The proposed ID-based proxy signature scheme improves signing delegation and allows a delegation chain. The proposed e-prescription system based on our two cryptographic schemes is more practical and efficient than Yang et al.'s system in terms of security, communication overheads, computational costs, and practical considerations.

  2. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  3. A pyramid scheme for three-dimensional diffusion equations on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Hang, Xudeng; Yuan, Guangwei

    2017-12-01

    In this paper, a new cell-centered finite volume scheme is proposed for three-dimensional diffusion equations on polyhedral meshes, which is called the pyramid scheme (P-scheme). The scheme is designed for polyhedral cells with nonplanar cell-faces. The normal flux on a nonplanar cell-face is discretized on a planar face, which is determined by a simple optimization procedure. The resulting discrete form of the normal flux involves only cell-centered and cell-vertex unknowns, and is free from face-centered unknowns. In the case of hexahedral meshes with skewed nonplanar cell-faces, a quite simple expression is obtained for the discrete normal flux. Compared with the second order accurate O-scheme [31], the P-scheme is more robust and the discretization cost is reduced considerably. Numerical results are presented to show the performance of the P-scheme on various kinds of distorted meshes. In particular, the P-scheme is shown to be second order accurate.

  4. A simple algorithm to improve the performance of the WENO scheme on non-uniform grids

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong

    2018-02-01

    This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS (WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1995) scheme designed on uniform grids in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. In the meantime, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for the non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
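
    For reference, the sketch below writes out the classical fifth-order WENO-JS reconstruction of a face value from five uniform-grid cell averages; the non-uniform interpolation correction proposed in the paper is not reproduced. The epsilon value and linear weights follow the standard Jiang-Shu choices.

```python
def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO-JS value at the right face of cell i, from the cell averages
    v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2} on a uniform grid."""
    # candidate third-order reconstructions on the three substencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13.0/12.0*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13.0/12.0*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13.0/12.0*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights from the optimal linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s

# Near a jump, the weights lean on the smooth substencil (result stays close to 1)
print(weno5_reconstruct(1.0, 1.0, 1.0, 0.0, 0.0))
```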

  5. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  6. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present a non-normal linear stability analysis of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed when α <= αmax, where αmax ranges between 0.5 and 0.77 depending on the discretization scheme. Also, the stability characteristics remain the same in both one and two dimensions. Sharper limits on the sufficient conditions for stability are obtained based on the pseudospectral radius (the Kreiss constant) than the restrictive limits based on the usual singular value decomposition analysis. We present a simple and robust reclassification scheme for the ghost cells ("hybrid ghost cells") to ensure Lax stability of the discrete systems. This has been tested successfully for both low and high order discretization schemes with transient growth of at most O(1). Moreover, we present a stable, fourth order EB reconstruction scheme. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.

  7. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k - 1)-level judgment spaces are equal. Practically, if k

  8. Simple Peer-to-Peer SIP Privacy

    NASA Astrophysics Data System (ADS)

    Koskela, Joakim; Tarkoma, Sasu

    In this paper, we introduce a model for enhancing privacy in peer-to-peer communication systems. The model is based on data obfuscation, preventing intermediate nodes from tracking calls, while still utilizing the shared resources of the peer network. This increases security when moving between untrusted, limited and ad-hoc networks, when the user is forced to rely on peer-to-peer schemes. The model is evaluated using a Host Identity Protocol-based prototype on mobile devices, and is found to provide good privacy, especially when combined with a source address hiding scheme. The contribution of this paper is to present the model and results obtained from its use, including usability considerations.

  9. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from the particle filter to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. As shown by our experiments on real fish tank sequences, the importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach.

  10. Decentralized digital adaptive control of robot motion

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.

  11. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

    The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the in situ trace species composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated using an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  12. Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    We show a simple example of a secret sharing scheme encoding classical secret to quantum shares that can realize an access structure impossible by classical information processing with limitation on the size of each share. The example is based on quantum stabilizer codes.

  13. On a Non-Reflecting Boundary Condition for Hyperbolic Conservation Laws

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.

    2003-01-01

    A non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented. The technique is based on the hyperbolicity of the Euler equation system and the first principle of plane (simple) wave propagation. The NRBC is simple and effective, provided the numerical scheme locally maintains a C^1 continuous solution at the boundary. Several numerical examples in 1D, 2D and 3D space are illustrated to demonstrate its robustness in practical computations.

  14. Identification of Organic Colorants in Art Objects by Solution Spectrophotometry: Pigments.

    ERIC Educational Resources Information Center

    Billmeyer, Fred W., Jr.; And Others

    1981-01-01

    Describes solution spectrophotometry as a simple, rapid identification technique for organic paint pigments. Reports research which includes analytical schemes for the extraction and separation of organic pigments based on their solubilities, and the preparation of an extensive reference collection of spectral curves allowing their identification.…

  15. Compact X-ray sources: X-rays from self-reflection

    NASA Astrophysics Data System (ADS)

    Mangles, Stuart P. D.

    2012-05-01

    Laser-based particle acceleration offers a way to reduce the size of hard-X-ray sources. Scientists have now developed a simple scheme that produces a bright flash of hard X-rays by using a single laser pulse both to generate and to scatter an electron beam.

  16. A Microphysics-Based Black Carbon Aging Scheme in a Global Chemical Transport Model: Constraints from HIPPO Observations

    NASA Astrophysics Data System (ADS)

    He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.

    2015-12-01

    Black carbon (BC) aging significantly affects its distributions and radiative properties, which is an important uncertainty source in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparing with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions as compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over Polar regions where condensation and coagulation are rather weak. We find that BC aging is primarily dominated by condensation process that accounts for ~75% of global BC aging, while the coagulation process is important over source regions where a large amount of pre-existing aerosols are available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to large improvement in model simulations. We are presently analyzing the impact of this scheme on global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.

  17. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
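
    The exact parameters of the non-iterative clustering used for MFC are not given in this record; the sketch below illustrates the generic "two simple parameters" idea in a density-peak style: for every Stokes-space sample compute a local density rho and the distance delta to the nearest sample of higher density, and take the samples with the largest rho*delta as cluster centers. The cutoff distance and the synthetic data are assumptions.

```python
import numpy as np

def density_peaks(points, d_cut, n_clusters):
    """Return indices of cluster centers via the (rho, delta) density-peak criterion."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = np.sum(d < d_cut, axis=1) - 1            # local density (neighbour count)
    order = np.argsort(-rho)                       # indices from densest to sparsest
    delta = np.empty(len(points))
    delta[order[0]] = d[order[0]].max()            # densest point: distance to farthest point
    for k in range(1, len(order)):
        i = order[k]
        delta[i] = d[i, order[:k]].min()           # distance to nearest denser point
    return np.argsort(-(rho * delta))[:n_clusters] # largest rho*delta -> cluster centers

rng = np.random.default_rng(2)
# three synthetic constellation clusters in a 3-D (Stokes-like) space
centers = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
pts = np.concatenate([c + 0.05 * rng.standard_normal((100, 3)) for c in centers])
print(sorted(density_peaks(pts, d_cut=0.15, n_clusters=3)))
```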

  18. A discrete-time adaptive control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.

  19. A new robust control scheme using second order sliding mode and fuzzy logic of a DFIM supplied by two five-level SVPWM inverters

    NASA Astrophysics Data System (ADS)

    Boudjema, Zinelaabidine; Taleb, Rachid; Bounadja, Elhadj

    2017-02-01

    The traditional field-oriented control strategy with a proportional-integral (PI) regulator for the speed drive of the doubly fed induction motor (DFIM) has some drawbacks, such as parameter-tuning complications, mediocre dynamic performance and reduced robustness. Therefore, based on the analysis of the mathematical model of a DFIM supplied by two five-level SVPWM inverters, this paper proposes a new robust control scheme based on super-twisting sliding mode and fuzzy logic. Conventional sliding mode control (SMC) produces a severe chattering effect on the electromagnetic torque developed by the DFIM. In order to resolve this problem, a second-order sliding mode technique based on the super-twisting algorithm and fuzzy logic functions is employed. The validity of the approach was tested using Matlab/Simulink software. The simulation results demonstrate the advantages of the proposed control scheme, including a simple control system design and reduced chattering.
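
    The DFIM model and the fuzzy supervision are beyond a short snippet; the fragment below only sketches the standard super-twisting (second-order sliding mode) law itself, u = -k1*|s|^(1/2)*sign(s) + v with v_dot = -k2*sign(s), discretized with a simple Euler step. The gains and the example sliding variable are placeholders.

```python
import math

class SuperTwisting:
    """Discrete-time sketch of the super-twisting second-order sliding-mode law."""

    def __init__(self, k1, k2, dt):
        self.k1, self.k2, self.dt = k1, k2, dt
        self.v = 0.0                          # integral (discontinuous) part of the law

    def control(self, s):
        """s is the sliding variable (e.g. the speed tracking error)."""
        sign_s = math.copysign(1.0, s) if s != 0.0 else 0.0
        u = -self.k1 * math.sqrt(abs(s)) * sign_s + self.v
        self.v += -self.k2 * sign_s * self.dt  # Euler step of v_dot = -k2*sign(s)
        return u

ctrl = SuperTwisting(k1=2.0, k2=1.5, dt=1e-4)
print(ctrl.control(0.3))   # torque-like command for a positive tracking error
```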

  20. An Expressive, Lightweight and Secure Construction of Key Policy Attribute-Based Cloud Data Sharing Access Control

    NASA Astrophysics Data System (ADS)

    Lin, Guofen; Hong, Hanshu; Xia, Yunhao; Sun, Zhixin

    2017-10-01

    Attribute-based encryption (ABE) is an interesting cryptographic technique for flexible cloud data sharing access control. However, some open challenges hinder its practical application. In previous schemes, all attributes are treated as having the same status, which is not the case in most practical scenarios. Meanwhile, the size of the access policy increases dramatically as its expressiveness grows. In addition, current research hardly notices that mobile front-end devices, such as smartphones, have limited computational performance, while ABE requires extensive bilinear pairing computation. In this paper, we propose a key-policy weighted attribute-based encryption scheme without bilinear pairing computation (KP-WABE-WB) for secure cloud data sharing access control. A simple weighting mechanism is presented to describe the different importance of each attribute. We introduce a novel construction of ABE that does not execute any bilinear pairing computation. Compared to previous schemes, our scheme has better performance in access-policy expressiveness and computational efficiency.

  1. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
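
    The surface-reconstruction and embedding steps are specific to the paper, but the polynomial sharing it builds on is standard Shamir secret sharing over a prime field: each encoded integer in [0, p - 1] is hidden as the constant term of a random degree-(k - 1) polynomial mod p, and the share values are its evaluations. The sketch below (with an illustrative small prime) shows that step and Lagrange-based recovery.

```python
import random

def make_shares(secret, k, n, p):
    """Split one integer in [0, p-1] into n shares, any k of which recover it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares, p):
    """Lagrange interpolation at x = 0 over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p   # modular inverse of den
    return secret

p = 257                        # illustrative prime; the paper's p is a design choice
shares = make_shares(123, k=3, n=5, p=p)
print(recover(shares[:3], p))  # any 3 shares reconstruct 123
```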

  2. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of the Boolean expression with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.

  3. NMRPipe: a multidimensional spectral processing system based on UNIX pipes.

    PubMed

    Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A

    1995-11-01

    The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.

  4. High-fidelity and low-latency mobile fronthaul based on segment-wise TDM and MIMO-interleaved arraying.

    PubMed

    Li, Longsheng; Bi, Meihua; Miao, Xin; Fu, Yan; Hu, Weisheng

    2018-01-22

    In this paper, we firstly demonstrate an advanced arraying scheme in the TDM-based analog mobile fronthaul system to enhance the signal fidelity, in which a segment of the antenna carrier signal (AxC) with an appropriate length serves as the granularity for TDM aggregation. Without introducing extra processing, the entire system can be realized by simple DSP. A theoretical analysis is presented to verify the feasibility of this scheme, and to evaluate its effectiveness, an experiment with ~7-GHz bandwidth and 20 8 × 8 MIMO group signals is conducted. Results show that the segment-wise TDM is completely compatible with the MIMO-interleaved arraying, which is employed in an existing TDM scheme to improve the bandwidth efficiency. Moreover, compared to the existing TDM schemes, our scheme can not only satisfy the latency requirement of 5G but also significantly reduce the multiplexed signal bandwidth, hence providing higher signal fidelity in the bandwidth-limited fronthaul system. The experimental EVM results verify that 256-QAM is supportable using the segment-wise TDM arraying with only 250-ns latency, while with the ordinary TDM arraying only 64-QAM can be supported.
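
    The following toy sketch illustrates the basic segment-wise aggregation idea only; the paper's actual framing, latency budget and MIMO interleaving are not reproduced. Each AxC stream is cut into fixed-length segments that are interleaved round-robin into one serial stream and recovered at the far end. The segment length and stream count are illustrative.

      # Toy sketch of segment-wise TDM aggregation: each antenna-carrier (AxC)
      # stream is cut into segments of length SEG, and the segments are interleaved
      # round-robin into one serial stream. SEG and the number of streams are
      # illustrative assumptions.
      import numpy as np

      SEG = 4                                              # samples per segment
      axc = [np.arange(16) + 100 * k for k in range(3)]    # three toy AxC streams

      def tdm_aggregate(streams, seg):
          segments = [s.reshape(-1, seg) for s in streams]         # split into segments
          # interleave: segment 0 of every stream, then segment 1 of every stream, ...
          return np.concatenate([row for group in zip(*segments) for row in group])

      def tdm_deaggregate(serial, n_streams, seg):
          rows = serial.reshape(-1, seg)                           # back into segments
          return [np.concatenate(rows[k::n_streams]) for k in range(n_streams)]

      serial = tdm_aggregate(axc, SEG)
      recovered = tdm_deaggregate(serial, len(axc), SEG)
      print(all(np.array_equal(a, b) for a, b in zip(axc, recovered)))   # True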

  5. Simple fibre based dispersion management for two-photon excited fluorescence imaging through an endoscope

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Marti, Dominik; Andersen, Peter E.

    2018-02-01

    We want to implement two-photon excitation fluorescence microscopy (TPEFM) into endoscopes, since TPEFM can provide relevant biomarkers for cancer staging and grading in hollow organs that are endoscopically accessible through natural orifices. However, many obstacles must be overcome, among others the delivery of short laser pulses to the distal end of the endoscope. To this end, we present imaging results using an all-fibre dispersion management scheme in a TPEFM setup. The scheme was conceived by Jespersen et al. in 2010 and relies on the combination of a single-mode fibre with normal dispersion and a higher-order-mode fibre with anomalous dispersion, fused in series using a long period grating. We show that this fibre assembly realises a simple and robust pulsed laser delivery system without any free-space optics, which is thus suitable for clinical use.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Abild-Pedersen, Frank

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  7. A simple quantum mechanical treatment of scattering in nanoscale transistors

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.

    2003-05-01

    We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self-consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes," often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, to discuss the physics of scattering, and to relate the quantum mechanical quantities used in our model to experimentally measured low field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering is an efficient and useful simulation method for modeling electron transport in nanoscale, silicon-on-insulator transistors.

  8. Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2) , su(3) , and g(2)

    NASA Astrophysics Data System (ADS)

    Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.

    2016-08-01

    We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.

  9. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, defected-lattice code, topological subsystem code and 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  10. Development of a discrete gas-kinetic scheme for simulation of two-dimensional viscous incompressible and compressible flows.

    PubMed

    Yang, L M; Shu, C; Wang, Y

    2016-03-01

    In this work, a discrete gas-kinetic scheme (DGKS) is presented for simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. For the circular function-based GKS, the integrals for conservation forms of moments over the infinite domain in the Maxwellian function-based GKS are simplified to integrals along the circle. As a result, the explicit formulations of conservative variables and fluxes are derived. However, these explicit formulations of the circular function-based GKS for viscous flows are still complicated, which may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS can be accurately satisfied by weighted summation of distribution functions at discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, can exactly match the integrals. Numerical results show that the present scheme can provide accurate numerical results for incompressible and compressible viscous flows with roughly the same computational cost as that needed by the Roe scheme.
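
    The statement that four discrete points on the circle can exactly match the required moment integrals can be checked numerically, as in the sketch below; only generic trigonometric moments up to third order are tested here, not the paper's specific conservation-form moments.

      # Numerical check of the idea behind the D2Q4 discretisation: an equal-weight
      # quadrature over four points on the circle reproduces the circular moments
      # cos^a(theta) sin^b(theta) exactly for a + b <= 3.
      import numpy as np

      theta4 = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi      # the four D2Q4 directions
      theta_ref = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)  # dense reference

      def moment_reference(a, b):
          # (1/2pi) * integral of cos^a(theta) sin^b(theta) over the circle
          return np.mean(np.cos(theta_ref) ** a * np.sin(theta_ref) ** b)

      def moment_d2q4(a, b):
          # equal-weight (1/4) summation over the four discrete points
          return np.mean(np.cos(theta4) ** a * np.sin(theta4) ** b)

      for a in range(4):
          for b in range(4 - a):                           # all moments with a + b <= 3
              err = abs(moment_d2q4(a, b) - moment_reference(a, b))
              print(f"cos^{a} sin^{b}: |4-point sum - integral| = {err:.1e}")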

  11. Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers

    DTIC Science & Technology

    2006-09-26

    Alata O., Fernandez-Maloigne C., and Ferrie J.C. (2001). Unsupervised Algorithm for the Segmentation of Three-Dimensional Magnetic Resonance Brain ...instinctual and learned responses in the brain, causing it to make decisions based on patterns in the stimuli. Using this deceptively simple process...2001. [2] Bohn C. (1997). An Incremental Unsupervised Learning Scheme for Function Approximation. In: Proceedings of the 1997 IEEE International

  12. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    NASA Astrophysics Data System (ADS)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  13. Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions

    NASA Technical Reports Server (NTRS)

    Barghouty, Abdulnasser F.

    2007-01-01

    On the surface of the moon, and not only during heightened solar activities, the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three (galactic, solar, and fission) radiation sources are employed in a 1-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating the reactor from the crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty, mainly in lunar regolith attenuation properties in addition to the radiation quality factor, can easily defeat this and similar optimization schemes.

  14. Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.

    2007-01-01

    On the surface of the moon, and not only during heightened solar activities, the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three radiation sources (galactic, solar, and fission) are employed in a one-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating the reactor from the crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty, mainly in lunar regolith attenuation properties in addition to the radiation quality factor, can easily defeat this and similar optimization schemes.
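
    A hedged sketch of this kind of one-dimensional optimization is given below: with made-up exponential dose-depth relations for the three sources, a bisection search finds the smallest regolith areal density that keeps the total dose under a limit. None of the numbers are taken from the report.

      # Sketch of a 1-D shielding optimisation: with hypothetical exponential
      # dose-depth relations for the galactic, solar and fission sources, find the
      # smallest regolith areal density that keeps the total annual dose below a
      # limit. All numbers are illustrative placeholders, not values from the report.
      import numpy as np

      SOURCES = {            # D(x) = D0 * exp(-x / L), x = areal density in g/cm^2
          "galactic": (30.0, 60.0),
          "solar":    (80.0, 25.0),
          "fission":  (200.0, 20.0),
      }
      DOSE_LIMIT = 25.0      # hypothetical allowed crew dose, cSv/yr

      def total_dose(x):
          return sum(d0 * np.exp(-x / L) for d0, L in SOURCES.values())

      # bisection on shield thickness: total_dose decreases monotonically with x
      lo, hi = 0.0, 1000.0
      while hi - lo > 0.01:
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if total_dose(mid) > DOSE_LIMIT else (lo, mid)

      print(f"required regolith areal density ~ {hi:.1f} g/cm^2")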

  15. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.

  16. Valence bond and enzyme catalysis: a time to break down and a time to build up.

    PubMed

    Sharir-Ivry, Avital; Varatharaj, Rajapandian; Shurki, Avital

    2015-05-04

    Understanding enzyme catalysis and developing the ability to control it are two great challenges in biochemistry. A few successful examples of computation-based enzyme design have proved the fantastic potential of computational approaches in this field; however, relatively modest rate enhancements have been reported, and further development of complementary methods is still required. Herein we propose a conceptually simple scheme to identify the specific role that each residue plays in catalysis. The scheme is based on a breakdown of the total catalytic effect into contributions of individual protein residues, which are further decomposed into chemically interpretable components by using valence bond theory. The scheme is shown to shed light on the origin of catalysis in wild-type haloalkane dehalogenase (wt-DhlA) and its mutants. Furthermore, the understanding gained through our scheme is shown to have great potential in facilitating the selection of non-optimal sites for catalysis and suggesting effective mutations to enhance the enzymatic rate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Self-consistent Green's function embedding for advanced electronic structure methods based on a dynamical mean-field concept

    NASA Astrophysics Data System (ADS)

    Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick

    2016-04-01

    We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.

  18. The computation of lipophilicities of ⁶⁴Cu PET systems based on a novel approach for fluctuating charges.

    PubMed

    Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger

    2013-08-21

    A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.

  19. Backward Channel Protection Based on Randomized Tree-Walking Algorithm and Its Analysis for Securing RFID Tag Information and Privacy

    NASA Astrophysics Data System (ADS)

    Choi, Wonjoon; Yoon, Myungchul; Roh, Byeong-Hee

    Eavesdropping on backward channels in RFID environments may cause severe privacy problems because it means the exposure of personal information related to the tags that each person carries. However, most existing RFID tag security schemes focus on forward channel protection. In this paper, we propose a simple but effective method to solve the backward channel eavesdropping problem based on a randomized tree-walking algorithm for securing tag ID information and privacy in RFID-based applications. In order to show the efficiency of the proposed scheme, we derive two performance models for the cases when CRC is used and not used. It is shown that the proposed method can lower the probability of eavesdropping on backward channels to nearly zero.
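
    The tree-walking idea behind the scheme can be sketched as follows, with randomized temporary IDs standing in for the protocol's session identifiers; the CRC handling, channel model and the paper's specific protection mechanism are not reproduced.

      # Sketch of binary tree-walking singulation over randomised *temporary* IDs:
      # an eavesdropper on the backward channel only ever observes the session's
      # random IDs, never the tags' real IDs. This is a simplified model of the
      # anticollision recursion, not the paper's full protocol.
      import random

      ID_BITS = 8

      def new_session_ids(n_tags):
          """Each tag draws a fresh random temporary ID for this inventory round."""
          return random.sample(range(2 ** ID_BITS), n_tags)

      def tree_walk(temp_ids, prefix=""):
          """Reader recursion: split on the next bit whenever responses collide."""
          responders = [t for t in temp_ids
                        if format(t, f"0{ID_BITS}b").startswith(prefix)]
          if not responders:
              return []
          if len(responders) == 1 or len(prefix) == ID_BITS:
              return responders                    # singulated (or duplicate IDs)
          return tree_walk(temp_ids, prefix + "0") + tree_walk(temp_ids, prefix + "1")

      temp_ids = new_session_ids(5)
      print(sorted(tree_walk(temp_ids)) == sorted(temp_ids))   # True: all tags found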

  20. Aeroelastic analysis of a troposkien-type wind turbine blade

    NASA Technical Reports Server (NTRS)

    Nitzsche, F.

    1981-01-01

    The linear aeroelastic equations for one curved blade of a vertical axis wind turbine in state vector form are presented. The method is based on a simple integrating matrix scheme together with the transfer matrix idea. The method is proposed as a convenient way of solving the associated eigenvalue problem for general support conditions.

  1. A simple quasi-diabatization scheme suitable for spectroscopic problems based on one-electron properties of interacting states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com

    We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.
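
    For a two-state case the rotation step can be sketched as below: the adiabatic pair is rotated by an angle chosen so that the transition property matches its reference-geometry value as closely as possible. The property matrix and reference value are invented numbers, not results for any system.

      # Two-state sketch of property-based quasi-diabatisation: rotate the adiabatic
      # pair so that the off-diagonal (transition) property best matches its value at
      # the high-symmetry reference geometry. All numbers are invented.
      import numpy as np

      # adiabatic one-electron property (e.g. dipole) matrix at a displaced geometry
      M_adia = np.array([[0.40, 0.95],
                         [0.95, -0.30]])
      mu_ref = 1.00      # transition property of the pair at the reference geometry

      def rotated_offdiag(theta):
          c, s = np.cos(theta), np.sin(theta)
          R = np.array([[c, -s], [s, c]])
          return (R.T @ M_adia @ R)[0, 1]        # transition property after rotation

      # brute-force scan over the mixing angle (a 1-D minimisation is sufficient here)
      thetas = np.linspace(-0.25 * np.pi, 0.25 * np.pi, 20001)
      errors = [(rotated_offdiag(th) - mu_ref) ** 2 for th in thetas]
      best = thetas[int(np.argmin(errors))]

      print(f"mixing angle = {best:.4f} rad, "
            f"quasi-diabatic transition property = {rotated_offdiag(best):.3f}")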

  2. A fast numerical scheme for causal relativistic hydrodynamics with dissipation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro

    2011-08-01

    Highlights: We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. Since we use the Riemann solver for solving the advection steps, our method can capture shocks very accurately. Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale for collisions of constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculation into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high energy phenomena of astrophysics and nuclear physics.
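
    The splitting idea can be illustrated on a toy scalar problem, as in the following sketch: an upwind advection half step, an exact exponential relaxation of the stiff variable, and another advection half step. This is not the relativistic Israel-Stewart solver itself; all parameters are illustrative.

      # Toy 1-D illustration of Strang splitting with a stiff relaxation term: half
      # an upwind advection step, an exact ("piecewise exact") relaxation step, and
      # another half advection step. Not the paper's relativistic solver.
      import numpy as np

      NX, CFL, TAU = 200, 0.5, 1e-4           # cells, Courant number, stiff timescale
      x = np.linspace(0.0, 1.0, NX, endpoint=False)
      q = np.exp(-200.0 * (x - 0.3) ** 2)     # advected quantity
      pi = np.zeros(NX)                       # dissipative flux, relaxes towards 0.1*q

      def advect(u, frac):
          """Upwind (donor-cell) advection by frac*CFL with unit speed, periodic BC."""
          return u - frac * CFL * (u - np.roll(u, 1))

      def relax_exact(pi, q, dt):
          """Exact solution of d(pi)/dt = -(pi - 0.1*q)/TAU over dt."""
          pi_eq = 0.1 * q
          return pi_eq + (pi - pi_eq) * np.exp(-dt / TAU)

      dt = CFL / NX                           # dt = CFL * dx with dx = 1/NX
      for _ in range(100):
          q, pi = advect(q, 0.5), advect(pi, 0.5)     # half advection step
          pi = relax_exact(pi, q, dt)                 # exact stiff relaxation step
          q, pi = advect(q, 0.5), advect(pi, 0.5)     # half advection step

      print(f"max dissipative flux after 100 steps: {pi.max():.4f}")  # ~0.1 * q.max()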

  3. Receptors as a master key for synchronization of rhythms

    NASA Astrophysics Data System (ADS)

    Nagano, Seido

    2004-03-01

    A simple but general scheme to achieve synchronization of rhythms was derived. The scheme has been inductively generalized from a modelling study of cellular slime mold. It was clarified that biological receptors work as apparatuses that convert an external stimulus into a nonlinear interaction within individual oscillators. Namely, the mathematical model receptor works as a nonlinear coupling apparatus between nonlinear oscillators. Synchronization is thus achieved as a result of competition between two kinds of nonlinearities, and even a small external stimulation via model receptors can change the characteristics of individual oscillators significantly. The derived scheme is mathematically very simple, but it is very powerful, as numerically demonstrated. The biological receptor scheme should significantly help the understanding of synchronization phenomena in biology, since groups of limit cycle oscillators and receptors are ubiquitous in biological systems. Reference: S. Nagano, Phys. Rev. E 67, 056215 (2003).

  4. On some Approximation Schemes for Steady Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special ``effective pressure'' in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  5. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.

  6. MPDATA: A positive definite solver for geophysical flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolarkiewicz, P.K.; Margolin, L.G.

    1997-12-31

    This paper is a review of MPDATA, a class of methods for the numerical simulation of advection based on the sign-preserving properties of upstream differencing. MPDATA was designed originally as an inexpensive alternative to flux-limited schemes for evaluating the transport of nonnegative thermodynamic variables (such as liquid water or water vapor) in atmospheric models. During the last decade, MPDATA has evolved from a simple advection scheme to a general approach for integrating the conservation laws of geophysical fluids on micro-to-planetary scales. The purpose of this paper is to summarize the basic concepts leading to a family of MPDATA schemes, review the existing MPDATA options, as well as to demonstrate the efficacy of the approach using diverse examples of complex geophysical flows.
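
    The basic two-pass MPDATA iteration can be sketched for 1-D constant-velocity advection as below: a donor-cell (upwind) pass followed by one corrective pass with the antidiffusive velocity. The full family offers further corrective iterations and nonoscillatory options not shown here; grid size and Courant number are illustrative.

      # Basic two-pass MPDATA sketch in 1-D (constant velocity, periodic domain):
      # a donor-cell upwind pass plus one corrective pass with the antidiffusive
      # velocity. Parameters are illustrative.
      import numpy as np

      NX, COUR, EPS = 100, 0.4, 1e-15          # cells, Courant number, divide guard
      psi = np.zeros(NX)
      psi[40:60] = 1.0                         # nonnegative scalar field

      def donor_cell(psi, U):
          """One upwind pass; U holds Courant numbers at interfaces i+1/2 (periodic)."""
          psi_r = np.roll(psi, -1)             # psi_{i+1}
          flux = np.maximum(U, 0.0) * psi + np.minimum(U, 0.0) * psi_r   # F_{i+1/2}
          return psi - (flux - np.roll(flux, 1))

      def mpdata_step(psi, cour):
          U = np.full(NX, cour)
          psi_star = donor_cell(psi, U)                      # first (upwind) pass
          dpsi = np.roll(psi_star, -1) - psi_star
          spsi = np.roll(psi_star, -1) + psi_star + EPS
          U_anti = (np.abs(U) - U ** 2) * dpsi / spsi        # antidiffusive velocity
          return donor_cell(psi_star, U_anti)                # corrective pass

      for _ in range(100):
          psi = mpdata_step(psi, COUR)

      # nonnegative (to round-off) and mass-conserving
      print(f"min = {psi.min():.2e}, total = {psi.sum():.6f}")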

  7. Performance analysis of cross-seeding WDM-PON system using transfer matrix method

    NASA Astrophysics Data System (ADS)

    Simatupang, Joni Welman; Pukhrambam, Puspa Devi; Huang, Yen-Ru

    2016-12-01

    In this paper, a model based on the transfer matrix method is adopted to analyze the effects of Rayleigh backscattering and Fresnel multiple reflections on a cross-seeding WDM-PON system. As part of analytical approximation methods, this time-independent model is quite simple but very efficient when it is applied to various WDM-PON transmission systems, including the cross-seeding scheme. The cross-seeding scheme is most beneficial for systems with low loop-back ONU gain or low reflection loss at the drop fiber for upstream data in bidirectional transmission. However, for downstream data transmission, power from multiple reflections could destroy the usefulness of the cross-seeding scheme when the reflectivity is high enough and the RN is positioned near the OLT or close to the ONU.
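
    As a generic illustration of the transfer matrix method itself (not the paper's PON-specific model with Rayleigh backscattering and RSOA gain), the sketch below cascades interface and propagation matrices for a lossless layer stack at normal incidence and reads off the overall reflection and transmission. The layer indices and thicknesses are illustrative.

      # Generic transfer-matrix cascade for a lossless layer stack at normal
      # incidence: total reflection and transmission from the product of interface
      # and propagation matrices. Illustrative only; not the paper's PON model.
      import numpy as np

      WAVELENGTH = 1.55e-6                        # metres
      layers = [(2.1, 0.25 * WAVELENGTH / 2.1),   # (refractive index, thickness):
                (1.45, 0.25 * WAVELENGTH / 1.45)] * 3   # a toy quarter-wave stack
      n_in, n_out = 1.0, 1.45

      def interface(n1, n2):
          r = (n1 - n2) / (n1 + n2)
          t = 2.0 * n1 / (n1 + n2)
          return np.array([[1.0, r], [r, 1.0]], dtype=complex) / t

      def propagation(n, d):
          phi = 2.0 * np.pi * n * d / WAVELENGTH
          return np.array([[np.exp(-1j * phi), 0.0], [0.0, np.exp(1j * phi)]])

      # cascade: input interface, then (layer propagation + next interface) in order
      indices = [n_in] + [n for n, _ in layers] + [n_out]
      M = interface(indices[0], indices[1])
      for k, (n, d) in enumerate(layers):
          M = M @ propagation(n, d) @ interface(indices[k + 1], indices[k + 2])

      R = abs(M[1, 0] / M[0, 0]) ** 2
      T = (n_out / n_in) * abs(1.0 / M[0, 0]) ** 2
      print(f"R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")   # R + T -> 1 (lossless)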

  8. Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes

    NASA Astrophysics Data System (ADS)

    Mandal, B.; Chakrabarti, A.

    2017-05-01

    A three dimensional finite element based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented into finite element code ABAQUS using user-defined subroutine UMAT. The existing failure criteria for the composite laminates are modified by including the failure criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is quite efficient and simple compared to other progressive damage models available in the literature. The efficiency of the present constitutive model and the computational scheme is verified by comparing the simulated results with the results available in the literature. A parametric study has been carried out to investigate the effect of change in lamination scheme on the failure behaviour of notched composite laminates.

  9. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.

  10. A simple, physically-based method for evaluating the economic costs of geo-engineering schemes

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2009-04-01

    The consumption of primary energy (e.g coal, oil, uranium) by the global economy is done in expectation of a return on investment. For geo-engineering schemes, however, the relationship between the primary energy consumption required and the economic return is, at first glance, quite different. The energy costs of a given scheme represent a removal of economically productive available energy to do work in the normal global economy. What are the economic implications of the energy consumption associated with geo-engineering techniques? I will present a simple thermodynamic argument that, in general, real (inflation-adjusted) economic value has a fixed relationship to the rate of global primary energy consumption. This hypothesis will be shown to be supported by 36 years of available energy statistics and a two millennia period of statistics for global economic production. What is found from this analysis is that the value in any given inflation-adjusted 1990 dollar is sustained by a constant 9.7 +/- 0.3 milliwatts of global primary energy consumption. Thus, insofar as geo-engineering is concerned, any scheme that requires some nominal fraction of continuous global primary energy output necessitates a corresponding inflationary loss of real global economic value. For example, if 1% of global energy output is required, at today's consumption rates of 15 TW this corresponds to an inflationary loss of 15 trillion 1990 dollars of real value. The loss will be less, however, if the geo-engineering scheme also enables a demonstrable enhancement to global economic production capacity through climate modification.
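
    The back-of-envelope conversion quoted above works out as follows.

      # The abstract's own numbers: if one inflation-adjusted 1990 dollar of value is
      # sustained by a constant 9.7 mW of primary energy consumption, diverting 1% of
      # a 15 TW global consumption rate corresponds to roughly 15 trillion 1990
      # dollars of real value.
      GLOBAL_POWER_W = 15e12        # ~15 TW global primary energy consumption
      FRACTION = 0.01               # share diverted to a geo-engineering scheme
      W_PER_1990_DOLLAR = 9.7e-3    # 9.7 +/- 0.3 mW per inflation-adjusted 1990 dollar

      value_loss = FRACTION * GLOBAL_POWER_W / W_PER_1990_DOLLAR
      print(f"~{value_loss / 1e12:.1f} trillion 1990 dollars")   # ~15.5 trillion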

  11. An online outlier identification and removal scheme for improving fault detection performance.

    PubMed

    Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej

    2014-05-01

    Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers will make the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, needing a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, the outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers will have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty or healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is presented. Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and then the scheme is applied on an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subjected to change due to both unknown faults and operating conditions.
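
    The moving-window outlier test described above can be sketched as below; a robust smoothed signal stands in for the neural-network state estimator, which is not reproduced, and the window length and threshold factor are illustrative.

      # Sketch of the moving-window outlier test: at each instant the residual
      # between measured and estimated states is formed, and a sample is flagged when
      # it deviates from the window's median by more than K times the window's
      # standard deviation. A median-smoothed signal stands in for the NN estimator.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(500)
      measured = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)
      measured[[60, 200, 321]] += 1.5                       # inject a few outliers

      # robust smoothed signal standing in for the paper's NN state estimator
      estimated = np.array([np.median(measured[max(0, i - 4):i + 5])
                            for i in range(t.size)])
      residual = measured - estimated

      WIN, K = 25, 4.0                                      # window length, threshold
      flagged = []
      for i in range(WIN, t.size):
          window = residual[i - WIN:i]
          if abs(residual[i] - np.median(window)) > K * window.std():
              flagged.append(i)
              measured[i] = estimated[i]                    # replace ("remove") it

      print("flagged indices:", flagged)                    # should include 60, 200, 321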

  12. A simple language to script and simulate breeding schemes: the breeding scheme language

    USDA-ARS?s Scientific Manuscript database

    It is difficult for plant breeders to determine an optimal breeding strategy given that the problem involves many factors, such as target trait genetic architecture and breeding resource availability. There are many possible breeding schemes for each breeding program. Although simulation study may b...

  13. Analysis of Calibration Errors for Both Short and Long Stroke White Light Experiments

    NASA Technical Reports Server (NTRS)

    Pan, Xaiopei

    2006-01-01

    This work will analyze focusing and tilt variations introduced by thermal changes in calibration processes. In particular the accuracy limits are presented for common short- and long-stroke experiments. A new, simple, practical calibration scheme is proposed and analyzed based on the SIM PlanetQuest's Micro-Arcsecond Metrology (MAM) testbed experiments.

  14. Strategic Planning for Smart Leadership: Rethinking Your Organization's Collective Future through a Workbook-Based, Three-Level Model.

    ERIC Educational Resources Information Center

    Austin, William J.

    This book is a simple, user-friendly, and practical guide to strategic planning. Chapter 1 gives an introduction to and overview of strategic planning. Chapters 2 through 4 review strategic-planning theory, the current nature of planning theory, its emergence as organizational practice, organizational structure schemes, and the limitations of…

  15. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous full combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented is the data base for benchmark quality test cases and an analysis of simple flows.

  16. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement based quantum computing, closely related to the brickwork state.

  17. Proxy-SU(3) symmetry in heavy deformed nuclei

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.

  18. Active identification and control of aerodynamic instabilities in axial and centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Krichene, Assad

    In this thesis, it is experimentally shown that dynamic cursors to stall and surge exist in both axial and centrifugal compressors using the experimental axial and centrifugal compressor rigs located in the School of Aerospace Engineering at the Georgia Institute of Technology. Further, it is shown that the dynamic cursors to stall and surge can be identified in real-time and they can be used in a simple control scheme to avoid the occurrence of stall and surge instabilities altogether. For the centrifugal compressor, a previously developed real-time observer is used in order to detect dynamic cursors to surge in real-time. An off-line analysis using the Fast Fourier Transform (FFT) of the open loop experimental data from the centrifugal compressor rig is carried out to establish the influence of compressor speed on the dynamic cursor frequency. The variation of the amplitude of dynamic cursors with compressor operating condition from experimental data is qualitatively compared with simulation results obtained using a generic compression system model subjected to white noise excitation. Using off-line analysis results, a simple control scheme based on fuzzy logic is synthesized for surge avoidance and recovery. The control scheme is implemented in the centrifugal compressor rig using compressor bleed as well as fuel flow to the combustor. Closed loop experimental results are obtained to demonstrate the effectiveness of the controller for both surge avoidance and surge recovery. The existence of stall cursors in an axial compression system is established using the observer scheme from off-line analysis of an existing database of a commercial gas turbine engine. However, the observer scheme is found to be ineffective in detecting stall cursors in the experimental axial compressor rig in the School of Aerospace Engineering at the Georgia Institute of Technology. An alternate scheme based on the amplitude of pressure data content at the blade passage frequency obtained using a pressure sensor located (in the casing) over the blade row is developed and used in the axial compressor rig for stall and surge avoidance and recovery. (Abstract shortened by UMI.)

  19. 10-Gbps optical duobinary signal generated by bandwidth-limited reflective semiconductor optical amplifier in colorless optical network units and compensated by fiber Bragg grating-based equalizer in optical line terminal

    NASA Astrophysics Data System (ADS)

    Fu, Meixia; Zhang, Min; Wang, Danshi; Cui, Yue; Han, Huanhuan

    2016-10-01

    We propose an optical duobinary-modulated upstream transmission scheme for reflective semiconductor optical amplifier-based colorless optical network units in a 10-Gbps wavelength-division multiplexed passive optical network (WDM-PON), where a fiber Bragg grating (FBG) is adopted as an optical equalizer for better performance. The demodulation module is extremely simple, only needing a binary intensity modulation direct detection receiver. A receiver sensitivity of -16.98 dBm at a bit error rate (BER) of 1.0×10⁻⁴ can be achieved at 120 km without the FBG, and with the FBG a BER of 2.1×10⁻⁵ at a sensitivity of -18.49 dBm can be reached at a transmission distance of 160 km, which demonstrates the feasibility of our proposed scheme. Moreover, it could be a highly cost-effective scheme for WDM-PON in the future.

  20. Measurement and compensation schemes for the pulse front distortion of ultra-intensity ultra-short laser pulses

    NASA Astrophysics Data System (ADS)

    Wu, Fenxiang; Xu, Yi; Yu, Linpeng; Yang, Xiaojun; Li, Wenkai; Lu, Jun; Leng, Yuxin

    2016-11-01

    Pulse front distortion (PFD) is mainly induced by the chromatic aberration in femtosecond high-peak power laser systems, and it can temporally distort the pulse in the focus and therefore decrease the peak intensity. A novel measurement scheme is proposed to directly measure the PFD of ultra-intensity ultra-short laser pulses, which can work not only without any extra struggle for the desired reference pulse, but also largely reduce the size of the required optical elements in measurement. The measured PFD in an experimental 200TW/27fs laser system is in good agreement with the calculated result, which demonstrates the validity and feasibility of this method effectively. In addition, a simple compensation scheme based on the combination of concave lens and parabolic lens is also designed and proposed to correct the PFD. Based on the theoretical calculation, the PFD of above experimental laser system can almost be completely corrected by using this compensator with proper parameters.

  1. A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Shi, Hongli; Luo, Shuqian

    2010-12-01

    In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.

  2. Comparative analysis of quantum cascade laser modeling based on density matrices and non-equilibrium Green's functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.

    2014-09-08

    We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.

  3. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  4. Simple many-body based screening mixing ansatz for improvement of GW/Bethe-Salpeter equation excitation energies of molecular systems

    NASA Astrophysics Data System (ADS)

    Ziaei, Vafa; Bredow, Thomas

    2017-11-01

    We propose a simple many-body based screening mixing strategy to considerably enhance the performance of the Bethe-Salpeter equation (BSE) approach for prediction of excitation energies of molecular systems. This strategy enables us to closely reproduce results of highly correlated equation of motion coupled cluster singles and doubles (EOM-CCSD) through optimal use of cancellation effects. We start from the Hartree-Fock (HF) reference state and take advantage of local density approximation (LDA) based random phase approximation (RPA) screening, denoted as W0-RPA@LDA with W0 as the dynamically screened interaction built upon LDA wave functions and energies. We further use this W0-RPA@LDA screening as an initial screening guess for calculation of quasiparticle energies in the framework of G0W0 @HF. The W0-RPA@LDA screening is further injected into the BSE. By applying such an approach on a set of 22 molecules for which the traditional G W /BSE approaches fail, we observe good agreement with respect to EOM-CCSD references. The reason for the observed good accuracy of this mixing ansatz (scheme A) lies in an optimal damping of HF exchange effect through the W0-RPA@LDA strong screening, leading to substantial decrease of typically overestimated HF electronic gap, and hence to better excitation energies. Further, we present a second multiscreening ansatz (scheme B), which is similar to scheme A with the exception that now the W0-RPA@HF screening is used in the BSE in order to further improve the overestimated excitation energies of carbonyl sulfide (COS) and disilane (Si2H6 ). The reason for improvement of the excitation energies in scheme B lies in the fact that W0-RPA@HF screening is less effective (and weaker than W0-RPA@LDA), which gives rise to stronger electron-hole effects in the BSE.

  5. A simplified filterless photonic frequency octupling scheme based on cascaded modulators

    NASA Astrophysics Data System (ADS)

    Zhang, Wu; Wen, Aijun; Gao, Yongsheng; Zheng, Hanxiao; Chen, Wei; He, Hongye

    2017-04-01

    A simplified filterless frequency octupling scheme by connecting an intensity modulator (IM) with a dual-parallel Mach-Zehnder modulator (DPMZM) in series is proposed in this paper. The LO signal is split into two parts: one part is used to drive the IM and the other part is applied to drive the DPMZM's upper sub-modulator, both at the peak point. The lower sub-modulator is driven only by a dc bias, and the parent modulator works at the null point. By properly adjusting the dc bias of the lower sub-modulator, only the ±4th-order optical sidebands dominate at the output of the DPMZM. The approach is verified by experiments, and 32-GHz and 40-GHz millimetre waves (mm-waves) are generated using 4-GHz and 5-GHz LO signals, respectively. We obtain a 15-dB electrical spurious suppression ratio (ESSR) and relatively good phase noise of the generated signal. Compared with other schemes, this scheme is simple in configuration because only an IM and a DPMZM are needed. Moreover, the scheme is tunable in frequency as no filter is used.

  6. Acetylene-based pathways for prebiotic evolution on Titan

    NASA Astrophysics Data System (ADS)

    Abbas, O.; Schulze-Makuch, D.

    2002-11-01

    Due to Titan's reducing atmosphere and lack of an ozone shield, ionizing radiation penetrates the atmosphere creating ions, radicals and electrons that are highly reactive producing versatile chemical species on Titan's surface. We propose that the catalytic hydrogenation of photochemically produced acetylene may be used as simple metabolic pathway by organisms at or near Titan's surface. While the acetylene may undergo this reaction, it can also undertake several other multi-step synthetic schemes that eventually lead to the production of amino acids or other biologically important molecules. Four model synthetic schemes will be described, and their relevance in relation to prebiotic evolution on Earth is discussed.

  7. A Maple package for computing Gröbner bases for linear recurrence relations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-04-01

    A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  8. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
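
    A stripped-down version of such a sampling simulation is sketched below: a spatially correlated field is generated, sampled with point (funnel-type) and line-averaging (trough-type) collectors, and the spread of the estimated means is compared. The field model and all numbers are illustrative, not the statistical models fitted in the study.

      # Sketch of a throughfall sampling simulation: generate a correlated field,
      # estimate its mean with point samples (funnels) or line averages (troughs),
      # and compare the spread of the estimates. All numbers are illustrative.
      import numpy as np

      rng = np.random.default_rng(1)
      SIZE, SMOOTH = 200, 8                     # grid size, correlation scale (cells)

      def correlated_field():
          """White noise smoothed by moving averages -> a simple correlated field."""
          f = rng.standard_normal((SIZE, SIZE))
          kernel = np.ones(SMOOTH) / SMOOTH
          f = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 0, f)
          f = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, f)
          return 10.0 + 4.0 * f                 # mean ~10 mm with spatial structure

      def funnel_estimate(field, n):            # n point collectors, random locations
          i, j = rng.integers(0, SIZE, n), rng.integers(0, SIZE, n)
          return field[i, j].mean()

      def trough_estimate(field, n, length=40): # n trough collectors (line averages)
          i = rng.integers(0, SIZE, n)
          j = rng.integers(0, SIZE - length, n)
          return np.mean([field[i[k], j[k]:j[k] + length].mean() for k in range(n)])

      errs_funnel, errs_trough = [], []
      for _ in range(300):                      # repeat over many simulated events
          field = correlated_field()
          truth = field.mean()
          errs_funnel.append(abs(funnel_estimate(field, 20) - truth) / truth)
          errs_trough.append(abs(trough_estimate(field, 5) - truth) / truth)

      print(f"median relative error: 20 funnels {np.median(errs_funnel):.1%}, "
            f"5 troughs {np.median(errs_trough):.1%}")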

  9. Decentralized adaptive control of manipulators - Theory, simulation, and experimentation

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    The author presents a simple decentralized adaptive-control scheme for multijoint robot manipulators based on the independent joint control concept. The control objective is to achieve accurate tracking of desired joint trajectories. The proposed control scheme does not use the complex manipulator dynamic model, and each joint is controlled simply by a PID (proportional-integral-derivative) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. Simulation results are given for a two-link direct-drive manipulator under adaptive independent joint control. The results illustrate trajectory tracking under coupled dynamics and varying payload. The proposed scheme is implemented on a MicroVAX II computer for motion control of the three major joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite coupled nonlinear joint dynamics.
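
    The independent-joint structure (per-joint PID feedback plus position-velocity-acceleration feedforward) can be sketched on a toy decoupled plant as below. The gains are fixed here and the plant is not a robot model; the paper's adaptation laws for adjusting the gains online are not reproduced.

      # Sketch of independent-joint control: per-joint PID feedback plus a
      # position/velocity/acceleration feedforward term, applied to a toy decoupled
      # plant (unit inertia with viscous friction). Gains fixed; no adaptation laws.
      import numpy as np

      DT, STEPS = 0.001, 4000
      KP, KI, KD = 400.0, 50.0, 40.0               # PID feedback gains (per joint)
      F_P, F_V, F_A = 0.0, 2.0, 1.0                # feedforward gains (toy-plant match)

      q = np.zeros(2)                              # joint positions
      dq = np.zeros(2)                             # joint velocities
      e_int = np.zeros(2)                          # integral of tracking error
      err_log = []

      for k in range(STEPS):
          t = k * DT
          # desired joint trajectories: position, velocity, acceleration
          qd = np.array([0.5 * np.sin(t), 0.3 * np.sin(2.0 * t)])
          dqd = np.array([0.5 * np.cos(t), 0.6 * np.cos(2.0 * t)])
          ddqd = np.array([-0.5 * np.sin(t), -1.2 * np.sin(2.0 * t)])

          e = qd - q
          e_int += e * DT
          u = (KP * e + KI * e_int + KD * (dqd - dq)   # independent-joint PID feedback
               + F_P * qd + F_V * dqd + F_A * ddqd)    # PVA feedforward

          ddq = u - 2.0 * dq                           # toy decoupled joint dynamics
          dq += ddq * DT
          q += dq * DT
          err_log.append(np.max(np.abs(e)))

      print(f"max tracking error over the last second: {max(err_log[-1000:]):.2e} rad")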

  10. Experimentally feasible security check for n-qubit quantum secret sharing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauer, Stefan; Huber, Marcus; Hiesmayr, Beatrix C.

    In this article we present a general security strategy for quantum secret sharing (QSS) protocols based on the scheme presented by Hillery, Buzek, and Berthiaume (HBB) [Phys. Rev. A 59, 1829 (1999)]. We focus on a generalization of the HBB protocol to n communication parties thus including n-partite Greenberger-Horne-Zeilinger states. We show that the multipartite version of the HBB scheme is insecure in certain settings and impractical when going to large n. To provide security for such QSS schemes in general we use the framework presented by some of the authors [M. Huber, F. Mintert, A. Gabriel, B. C. Hiesmayr, Phys. Rev. Lett. 104, 210501 (2010)] to detect certain genuine n-partite entanglement between the communication parties. In particular, we present a simple inequality which tests the security.

  11. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary schemes. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  12. Simplified demultiplexing scheme for two PDM-IM/DD systems utilizing a single Stokes analyzer over 25-km SMF.

    PubMed

    Pan, Yan; Yan, Lianshan; Yi, Anlin; Jiang, Lin; Pan, Wei; Luo, Bin; Zou, Xihua

    2017-10-15

    We propose a four-linear state of polarization multiplexed intensity modulation and direct detection (IM/DD) scheme based on two orthogonal polarization division multiplexing (PDM) on-off keying systems. We also experimentally demonstrate a simple demultiplexing algorithm for this scheme by utilizing only a single Stokes analyzer. At the rate of 4×10 Gbit/s, the experimental results show that the power penalty of the proposed scheme is about 1.5 dB, compared to the single PDM-IM/DD for back-to-back (B2B) transmission. Compared to B2B, just about 1.7 dB power penalty is required after 25 km Corning LEAF optical fiber transmission. Meanwhile, the performance of the polarization tracking is evaluated, and the results show that the BER fluctuation is less than 0.5 dB with a polarization scrambling rate up to 708.75 deg/s.

  13. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is the art of hiding information in a way that prevents the detection of hidden messages. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility, and minimizes the distortion between the cover image and the obtained stego image.
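
    To make the embedding step concrete, the sketch below substitutes k message bits into the k least significant bits of selected pixels; it is a hypothetical illustration of that single step only, and the PSO search for the best pixel positions, as well as the pixel values used here, are not taken from the paper.

      import numpy as np

      def embed_bits(pixels, positions, bits, k):
          """Substitute k message bits into the k least significant bits of each chosen pixel."""
          out = pixels.copy()
          chunks = [bits[i:i + k] for i in range(0, len(bits), k)]
          for pos, chunk in zip(positions, chunks):
              value = int("".join(str(b) for b in chunk), 2)
              out[pos] = (int(out[pos]) & ~((1 << k) - 1)) | value
          return out

      cover = np.array([120, 200, 37, 64], dtype=np.uint8)
      stego = embed_bits(cover, positions=[0, 2], bits=[1, 0, 1, 1], k=2)
      print(stego)  # the 2 LSBs of pixels 0 and 2 now carry the message bits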

  14. Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  15. Hierarchy Bayesian model based services awareness of high-speed optical access networks

    NASA Astrophysics Data System (ADS)

    Bai, Hui-feng

    2018-03-01

    As the speed of optical access networks soars and the number of supported services keeps increasing, the service-supporting ability of optical access networks suffers greatly from the shortage of service awareness. Aiming to solve this problem, a hierarchy Bayesian model based services awareness mechanism is proposed for high-speed optical access networks. This approach builds a so-called hierarchy Bayesian model according to the structure of typical optical access networks. Moreover, the proposed scheme is able to conduct simple services awareness operations in each optical network unit (ONU) and to perform complex services awareness from the whole-system view in the optical line terminal (OLT). Simulation results show that the proposed scheme is able to achieve better quality of service (QoS) in terms of packet loss rate and time delay.

  16. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  17. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl; Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven; Daude, F.

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.

  18. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  19. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
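
    As background for readers unfamiliar with QIM, the sketch below shows plain scalar quantization-index modulation on a handful of coefficients, embedding one bit per coefficient by choosing between two interleaved quantizers; the sparse wavelet-coefficient grouping and the BCH coding studied in the article are not reproduced, and all names and values are illustrative.

      import numpy as np

      def qim_embed(coeffs, bits, delta):
          # Bit 0 uses the lattice {k*delta}; bit 1 uses the shifted lattice {k*delta + delta/2}.
          offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
          return delta * np.round((coeffs - offsets) / delta) + offsets

      def qim_detect(received, delta):
          # Decide each bit by which of the two lattices the received value is closer to.
          d0 = np.abs(received - delta * np.round(received / delta))
          d1 = np.abs(received - (delta * np.round((received - delta / 2) / delta) + delta / 2))
          return (d1 < d0).astype(int)

      coeffs = np.array([10.3, -4.8, 7.1, 0.9])
      marked = qim_embed(coeffs, bits=[1, 0, 1, 0], delta=2.0)
      noisy = marked + np.random.default_rng(1).normal(0.0, 0.2, size=4)
      print(qim_detect(noisy, delta=2.0))  # recovers [1, 0, 1, 0] for small distortions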

  20. An Equation of State for Polymethylpentene (TPX) including Multi-Shock Response

    NASA Astrophysics Data System (ADS)

    Aslam, Tariq; Gustavsen, Richard; Sanchez, Nathaniel; Bartram, Brian

    2011-06-01

    The equation of state (EOS) of polymethylpentene (TPX) is examined through both single shock Hugoniot data as well as more recent multi-shock compression and release experiments. Results from the recent multi-shock experiments on LANL's 2-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme utilizing total-variation-diminishing interpolation and an approximate Riemann solver will be presented as well as the methodology of calibration. It is shown that a simple Mie-Gruneisen EOS based on a Keane fitting form for the isentrope can replicate both the single shock and multi-shock experiments.
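
    For reference, the generic Mie-Gruneisen form referenced to an isentrope is written out below in LaTeX; this is the standard textbook expression only, and the Keane-form reference curve and the TPX calibration parameters of the paper are not reproduced here.

      % Mie-Gruneisen EOS referenced to an isentrope (generic form, notation assumed)
      \begin{equation}
        p(\rho, e) = p_{s}(\rho) + \Gamma(\rho)\,\rho\,\bigl[e - e_{s}(\rho)\bigr],
      \end{equation}
      % where p_s(rho) and e_s(rho) are the pressure and specific internal energy on the
      % reference isentrope and Gamma(rho) is the Gruneisen parameter.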

  1. An equation of state for polymethylpentene (TPX) including multi-shock response

    NASA Astrophysics Data System (ADS)

    Aslam, Tariq D.; Gustavsen, Rick; Sanchez, Nathaniel; Bartram, Brian D.

    2012-03-01

    The equation of state (EOS) of polymethylpentene (TPX) is examined through both single shock Hugoniot data as well as more recent multi-shock compression and release experiments. Results from the recent multi-shock experiments on LANL's two-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme utilizing total variation diminishing interpolation and an approximate Riemann solver will be presented as well as the methodology of calibration. It is shown that a simple Mie-Grüneisen EOS based on a Keane fitting form for the isentrope can replicate both the single shock and multi-shock experiments.

  2. Improvements to a PCR-based serogrouping scheme for Salmonella enterica from dairy farm samples

    USDA-ARS?s Scientific Manuscript database

    The PCR method described by Herrera-León, et al. (Research in Microbiology 158:122-127, 2007) has proved to be a simple and useful technique for characterizing isolates of Salmonella enterica enterica belonging to serogroups B, C1, C2, D1, and E1, groups which encompass a majority of the isolates fr...

  3. Benefits of a 4th Ice Class in the Simulated Radar Reflectivities of Convective Systems Using a Bulk Microphysics Scheme

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen

    2015-01-01

    Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk to multi-moment, multi-class to explicit bin schemes. This study details the benefits of adding a 4th ice class (hail) to an already improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and the inclusion of a rain evaporation correction and vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both the intense and weaker case and were better than earlier 3-ice versions when using a moderate and large intercept for hail, respectively. Simulated reflectivity distributions versus height were also improved versus radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.

  4. Effect of synthetic jet modulation schemes on the reduction of a laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Seo, J. H.; Cadieux, F.; Mittal, R.; Deem, E.; Cattafesta, L.

    2018-03-01

    The response of a laminar separation bubble to synthetic jet forcing with various modulation schemes is investigated via direct numerical simulations. A simple sinusoidal waveform is considered as a reference case, and various amplitude modulation schemes, including the square-wave "burst" modulation, are employed in the simulations. The results indicate that burst modulation is less effective at reducing the length of the flow separation than the sinusoidal forcing primarily because burst modulation is associated with a broad spectrum of input frequencies that are higher than the target frequency for the flow control. It is found that such high-frequency forcing delays vortex roll-up and promotes vortex pairing and merging, which have an adverse effect on reducing the separation bubble length. A commonly used amplitude modulation scheme is also found to have reduced effectiveness due to its spectral content. A new amplitude modulation scheme which is tailored to impart more energy at the target frequency is proposed and shown to be more effective than the other modulation schemes. Experimental measurements confirm that modulation schemes can be preserved through the actuator and used to enhance the energy content at the target modulation frequency. The present study therefore suggests that the effectiveness of synthetic jet-based flow control could be improved by carefully designing the spectral content of the modulation scheme.

  5. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  6. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.
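
    For orientation, a schematic of the standard Perturbed Lagrangian functional and its static condensation is given below in LaTeX; the notation is assumed for illustration and is not copied from the paper.

      % Perturbed Lagrangian functional (schematic); W is the isochoric strain energy,
      % J = det F, p the pressure (Lagrange multiplier) and kappa the bulk/penalty modulus.
      \begin{equation}
        \Pi_{\mathrm{PL}}(\mathbf{u}, p) = \int_{\Omega}
          \Bigl[ W(\bar{\mathbf{C}}) + p\,(J - 1) - \tfrac{1}{2\kappa}\,p^{2} \Bigr]\,\mathrm{d}V .
      \end{equation}
      % Statically condensing p = kappa (J - 1) recovers the penalty form with
      % volumetric energy (kappa/2)(J - 1)^2.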

  7. An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers.

    PubMed

    Zapateiro De la Hoz, Mauricio; Acho, Leonardo; Vidal, Yolanda

    2015-01-01

    Security and secrecy are some of the important concerns in the communications world. In recent years, several encryption techniques have been proposed in order to improve the secrecy of the information transmitted. Chaos-based encryption techniques are being widely studied as part of the problem because of the highly unpredictable and random-looking nature of chaotic signals. In this paper we propose a digital-based communication system that uses the logistic map, which is a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. On the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields, which are simple yet powerful development kits that allow for the implementation of the communication system for testing purposes.
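
    A toy sketch of the encryption idea, under the assumption that the Delta-modulated message bits are simply XORed with a thresholded logistic-map keystream, is given below; the actual handling of the key signal, the channels and the Arduino implementation described in the paper are not shown, and parameter values are arbitrary.

      def logistic_keystream(n, x0=0.61, r=3.99):
          # Iterate the logistic map (chaotic for r close to 4) and threshold it into bits.
          bits, x = [], x0
          for _ in range(n):
              x = r * x * (1.0 - x)
              bits.append(1 if x > 0.5 else 0)
          return bits

      def delta_modulate(signal, step=0.1):
          # 1-bit Delta modulation: emit 1 if the sample is above the running estimate.
          bits, estimate = [], 0.0
          for sample in signal:
              bit = 1 if sample > estimate else 0
              estimate += step if bit else -step
              bits.append(bit)
          return bits

      message = delta_modulate([0.05 * t for t in range(20)])   # a simple ramp as input
      key = logistic_keystream(len(message))
      cipher = [m ^ k for m, k in zip(message, key)]
      recovered = [c ^ k for c, k in zip(cipher, key)]          # receiver knows x0 and r
      assert recovered == message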

  8. A new routing enhancement scheme based on node blocking state advertisement in wavelength-routed WDM networks

    NASA Astrophysics Data System (ADS)

    Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng

    2005-02-01

    The increasing switching capacity brings considerable complexity to the optical node. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means that the node is blocking to some extent; examples include the multi-granularity switching node, which in fact uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WC). It is conceivable that these blocking nodes will have great effects on the problem of routing and wavelength assignment. Some previous works studied the blocking case of partial-WC OXCs using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find a path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model of the 14-node NSFNET, all nodes of which employ a partially shared WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.

  9. Sensitivity of measurement-based purification processes to inner interactions

    NASA Astrophysics Data System (ADS)

    Militello, Benedetto; Napoli, Anna

    2018-02-01

    The sensitivity of a repeated measurement-based purification scheme to additional undesired couplings is analyzed, focusing on the very simple and archetypical system consisting of two two-level systems interacting with a repeatedly measured one. Several regimes are considered and in the strong coupling limit (i.e., when the coupling constant of the undesired interaction is very large) the occurrence of a quantum Zeno effect is proven to dramatically jeopardize the efficiency of the purification process.

  10. Improving the power efficiency of SOA-based UWB over fiber systems via pulse shape randomization

    NASA Astrophysics Data System (ADS)

    Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.

    2016-09-01

    A simple pulse shape randomization scheme is considered in this paper for improving the performance of ultra-wideband (UWB) communication systems using on-off keying (OOK) or pulse position modulation (PPM) formats. The advantage of the proposed scheme, which can be employed either for impulse radio (IR) or for carrier-based systems, is first theoretically studied based on closed-form derivations of power spectral densities. Then, we investigate an application to an IR-UWB over optical fiber system, utilizing the 4th and 5th orders of Gaussian derivatives. Our approach proves to be effective for 1 Gbps-PPM and 2 Gbps-OOK transmissions, with an advantage in terms of power efficiency for short distances. We also examine the performance of a system employing an in-line Semiconductor Optical Amplifier (SOA) with a view to achieving a reach extension while limiting the cost and system complexity.

  11. Multichannel temperature controller for hot air solar house

    NASA Technical Reports Server (NTRS)

    Currie, J. R.

    1979-01-01

    This paper describes an electronic controller that is optimized to operate a hot air solar system. Thermal information is obtained from copper constantan thermocouples and a wall-type thermostat. The signals from the thermocouples are processed through a single amplifier using a multiplexing scheme. The multiplexing reduces the component count and automatically calibrates the thermocouple amplifier. The processed signals connect to some simple logic that selects one of the four operating modes. This simple, inexpensive, and reliable scheme is well suited to control hot air solar systems.

  12. The visual display of regulatory information and networks.

    PubMed

    Pirson, I; Fortemaison, N; Jacobs, C; Dremier, S; Dumont, J E; Maenhaut, C

    2000-10-01

    Cell regulation and signal transduction are becoming increasingly complex, with reports of new cross-signalling, feedback, and feedforward regulations between pathways and between the multiple isozymes discovered at each step of these pathways. However, this information, which requires pages of text for its description, can be summarized in very simple schemes, although there is no consensus on the drawing of such schemes. This article presents a simple set of rules that allows a lot of information to be inserted in easily understandable displays.

  13. An evaluation of the schemes of ocean surface albedo parameterization in shortwave radiation estimation

    NASA Astrophysics Data System (ADS)

    Niu, Hailin; Zhang, Xiaotong; Liu, Qiang; Feng, Youbin; Li, Xiuhong; Zhang, Jialin; Cai, Erli

    2015-12-01

    The ocean surface albedo (OSA) is a deciding factor in ocean net surface shortwave radiation (ONSSR) estimation. Several OSA schemes have been proposed successively, but no consensus has been reached on which scheme best estimates the ONSSR. On the basis of an analysis of currently existing OSA parameterizations, including those of Briegleb et al. (B), Taylor et al. (T), Hansen et al. (H), Jin et al. (J), Preisendorfer and Mobley (PM86), and Feng (F), this study examines the differences in the impact of the OSA on ONSSR estimation under actual downward shortwave radiation (DSR) conditions. We then discuss the necessity and applicability of integrating the more complicated OSA schemes into climate models. It is concluded that the solar zenith angle (SZA) and the wind speed are the two factors with the strongest effect on broadband OSA; thus the different OSA parameterizations diverge most strongly at high latitudes and under strong winds. The OSA schemes can lead to ONSSR differences on the order of 20 W m-2. Taylor's scheme gives the best estimate, with Feng's result following closely. However, the accuracy of the estimated instantaneous OSA changes with local time: Jin's scheme generally performs best at noon and in the afternoon, whereas PM86 is best in the morning, which indicates that the more complicated OSA schemes capture the temporal variation of the OSA better than the simple ones.

  14. System-wide hybrid MPC-PID control of a continuous pharmaceutical tablet manufacturing process via direct compaction.

    PubMed

    Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit

    2013-11-01

    The next generation of QbD based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with an efficient advanced model-based feedback control system, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises coupled dynamics involving slow and fast responses, indicating the requirement of a hybrid control scheme such as a combined MPC-PID control scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid scheme of MPC-PID control. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning of both MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet that was simulated in gPROMS (Process System Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared to PID-only or MPC-only control schemes, illustrating the potential of a hybrid control scheme in improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Testing conceptual and physically based soil hydrology schemes against observations for the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Guimberteau, M.; Ducharne, A.; Ciais, P.; Boisier, J. P.; Peng, S.; De Weirdt, M.; Verbeeck, H.

    2014-06-01

    This study analyzes the performance of the two soil hydrology schemes of the land surface model ORCHIDEE in estimating Amazonian hydrology and phenology for five major sub-basins (Xingu, Tapajós, Madeira, Solimões and Negro), during the 29-year period 1980-2008. A simple 2-layer scheme with a bucket topped by an evaporative layer is compared to an 11-layer diffusion scheme. The soil schemes are coupled with a river routing module and a process model of plant physiology, phenology and carbon dynamics. The simulated water budget and vegetation functioning components are compared with several data sets at sub-basin scale. The use of the 11-layer soil diffusion scheme does not significantly change the Amazonian water budget simulation when compared to the 2-layer soil scheme (+3.1 and -3.0% in evapotranspiration and river discharge, respectively). However, the higher water-holding capacity of the soil and the physically based representation of runoff and drainage in the 11-layer soil diffusion scheme result in more dynamic soil water storage variation and improved simulation of the total terrestrial water storage when compared to GRACE satellite estimates. The greater soil water storage within the 11-layer scheme also results in increased dry-season evapotranspiration (+0.5 mm d-1, +17%) and improves river discharge simulation in the southeastern sub-basins such as the Xingu. Evapotranspiration over this sub-basin is sustained during the whole dry season with the 11-layer soil diffusion scheme, whereas the 2-layer scheme limits it after only 2 dry months. Lower plant drought stress simulated by the 11-layer soil diffusion scheme leads to better simulation of the seasonal cycle of photosynthesis (GPP) when compared to a GPP data-driven model based on eddy covariance and satellite greenness measurements. A dry-season length between 4 and 7 months over the entire Amazon Basin is found to be critical in distinguishing differences in hydrological feedbacks between the soil and the vegetation cover simulated by the two soil schemes. On average, the multilayer soil diffusion scheme provides little improvement in simulated hydrology over the wet tropical Amazonian sub-basins, but a more significant improvement is found over the drier sub-basins. The use of a multilayer soil diffusion scheme might become critical for assessments of future hydrological changes, especially in southern regions of the Amazon Basin where longer dry seasons and more severe droughts are expected in the next century.

  16. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
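
    The puncturing idea behind rate-compatible codes can be pictured with a few lines of Python: bits of the rate-1/2 mother code are deleted according to a periodic pattern, and rate compatibility means every higher-rate pattern only deletes bits that the lower-rate patterns also transmit. The sketch below is illustrative only; the encoder output and pattern are placeholders, not the codes of the paper.

      def puncture(coded_bits, pattern):
          """Delete mother-code bits wherever the periodic pattern holds a 0."""
          period = len(pattern)
          return [b for i, b in enumerate(coded_bits) if pattern[i % period] == 1]

      # Keeping 3 of every 4 rate-1/2 output bits gives rate (1/2)*(4/3) = 2/3.
      mother_output = [1, 0, 1, 1, 0, 0, 1, 0]   # placeholder rate-1/2 encoder output
      print(puncture(mother_output, pattern=[1, 1, 1, 0]))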

  17. Belowground Controls on the Dynamics of Plant Communities

    NASA Astrophysics Data System (ADS)

    Sivandran, G.

    2013-12-01

    Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. In particular, the rooting strategies employed by vegetation can be critical to their survival. These rooting strategies also dictate the competitive outcomes within plant communities. A dynamic rooting scheme was incorporated into tRIBS+VEGGIE (a physically-based, distributed ecohydrologic model). The dynamic rooting scheme allows vegetation the freedom to alter its rooting profile in response to changes in rainfall and soil conditions, in a way that more closely mimics observed phenotypic plasticity. A simple competition-colonization model was combined with the new dynamic root scheme to explore the role of root adaptability in plant competition and landscape evolution in semi-arid environments. The influence of the model representation of rooting strategy on the long-term plant community composition is examined.

  18. A Penalty Method for the Numerical Solution of Hamilton-Jacobi-Bellman (HJB) Equations in Finance

    NASA Astrophysics Data System (ADS)

    Witte, J. H.; Reisinger, C.

    2010-09-01

    We present a simple and easy to implement method for the numerical solution of a rather general class of Hamilton-Jacobi-Bellman (HJB) equations. In many cases, the considered problems have only a viscosity solution, to which, fortunately, many intuitive (e.g. finite difference based) discretisations can be shown to converge. However, especially when using fully implicit time stepping schemes with their desirable stability properties, one is still faced with the considerable task of solving the resulting nonlinear discrete system. In this paper, we introduce a penalty method which approximates the nonlinear discrete system to an order of O(1/ρ), where ρ>0 is the penalty parameter, and we show that an iterative scheme can be used to solve the penalised discrete problem in finitely many steps. We include a number of examples from mathematical finance for which the described approach yields a rigorous numerical scheme and present numerical results.
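
    To give a flavor of the penalty idea (in a simpler obstacle-problem setting common in finance, not the paper's exact HJB formulation), the discrete complementarity problem and its penalised approximation can be written as follows; the notation is assumed for illustration.

      % Discrete obstacle-type problem and its penalised approximation (schematic)
      \begin{align}
        \min\bigl(A V - b,\; V - g\bigr) &= 0, \\
        A V_{\rho} - b - \rho\,\max\bigl(g - V_{\rho},\, 0\bigr) &= 0,
      \end{align}
      % with the penalised solution satisfying \|V_{\rho} - V\|_{\infty} = O(1/\rho)
      % as the penalty parameter \rho \to \infty.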

  19. A novel double loop control model design for chemical unstable processes.

    PubMed

    Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He

    2014-03-01

    In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating unstable process and transforms the original process into a stable first-order plus pure dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with a clear physical meaning, and its characteristic equation is easy to stabilize. The three controllers of the improved scheme are designed separately, and each is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.

  20. Content-based unconstrained color logo and trademark retrieval with color edge gradient co-occurrence histograms

    NASA Astrophysics Data System (ADS)

    Phan, Raymond; Androutsos, Dimitrios

    2008-01-01

    In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.

  1. Fraction number of trapped atoms and velocity distribution function in sub-recoil laser cooling scheme

    NASA Astrophysics Data System (ADS)

    Alekseev, V. A.; Krylova, D. D.

    1996-02-01

    The analytical investigation of Bloch equations is used to describe the main features of the 1D velocity selective coherent population trapping cooling scheme. For the initial stage of cooling the fraction of cooled atoms is derived in the case of a Gaussian initial velocity distribution. At very long times of interaction the fraction of cooled atoms and the velocity distribution function are described by simple analytical formulae and do not depend on the initial distribution. These results are in good agreement with those of Bardou, Bouchaud, Emile, Aspect and Cohen-Tannoudji based on statistical analysis in terms of Levy flights and with Monte-Carlo simulations of the process.

  2. An effective write policy for software coherence schemes

    NASA Technical Reports Server (NTRS)

    Chen, Yung-Chin; Veidenbaum, Alexander V.

    1992-01-01

    The authors study the write behavior and evaluate the performance of various write strategies and buffering techniques for a MIN-based multiprocessor system using the simple software coherence scheme. Hit ratios, memory latencies, total execution time, and total write traffic are used as the performance indices. The write-through write-allocate no-fetch cache using a write-back write buffer is shown to have a better performance than both write-through and write-back caches. This type of write buffer is effective in reducing the volume as well as bursts of write traffic. On average, the use of a write-back cache reduces by 60 percent the total write traffic generated by a write-through cache.

  3. ExoCross: Spectra from molecular line lists

    NASA Astrophysics Data System (ADS)

    Yurchenko, Sergei N.; Al-Refaie, Ahmed; Tennyson, Jonathan

    2018-03-01

    ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.
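
    For readers who want to see what the named line shapes look like numerically, the sketch below implements generic Doppler (Gaussian), Lorentzian and Voigt profiles parameterized by half-widths; this is standard, generic code, not part of ExoCross.

      import numpy as np
      from scipy.special import wofz

      def gaussian(nu, nu0, alpha):
          # Doppler profile with half-width at half-maximum alpha.
          return np.sqrt(np.log(2) / np.pi) / alpha * np.exp(-np.log(2) * ((nu - nu0) / alpha) ** 2)

      def lorentzian(nu, nu0, gamma):
          # Pressure-broadened profile with half-width at half-maximum gamma.
          return gamma / np.pi / ((nu - nu0) ** 2 + gamma ** 2)

      def voigt(nu, nu0, alpha, gamma):
          # Convolution of the two, evaluated via the Faddeeva function.
          sigma = alpha / np.sqrt(2.0 * np.log(2))
          z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      nu = np.linspace(-5.0, 5.0, 11)
      print(voigt(nu, 0.0, alpha=1.0, gamma=0.5))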

  4. Thermal-Performance Instability in Piezoresistive Sensors: Inducement and Improvement

    PubMed Central

    Liu, Yan; Wang, Hai; Zhao, Wei; Qin, Hongbo; Fang, Xuan

    2016-01-01

    The field of piezoresistive sensors has been undergoing a significant revolution in terms of design methodology, material technology and micromachining process. However, the temperature dependence of sensor characteristics remains a hurdle to cross. This review focuses on the issues in thermal-performance instability of piezoresistive sensors. Based on the operation fundamentals, inducements to the instability are investigated in detail and correspondingly available ameliorative methods are presented. Pros and cons of each improvement approach are also summarized. Though several schemes have been proposed and put into practice with favorable results, schemes featuring simple implementation and excellent compatibility with existing techniques are still urgently needed to construct a piezoresistive sensor with excellent comprehensive performance.

  5. Stable radio frequency dissemination by simple hybrid frequency modulation scheme.

    PubMed

    Yu, Longqiang; Wang, Rong; Lu, Lin; Zhu, Yong; Wu, Chuanxin; Zhang, Baofu; Wang, Peizhang

    2014-09-15

    In this Letter, we propose a fiber-based stable radio frequency transfer system using a hybrid frequency modulation scheme. Creatively, two radio frequency signals are combined and simultaneously transferred by only one laser diode. One frequency component is used to detect the phase fluctuation, and the other one is the derivative compensated signal providing a stable frequency for the remote end. A proper ratio of the frequencies of the components is well maintained by a parameter m to avoid interference between them. Experimentally, a stable 200 MHz signal is transferred over 100 km of optical fiber with the help of a 1 GHz detecting signal, and a fractional instability of 2×10^-17 at 10^5 s is achieved.

  6. High-power Yb-fiber comb with feed-forward control of nonlinear-polarization-rotation mode-locking and large-mode-area fiber amplification.

    PubMed

    Yan, Ming; Li, Wenxue; Yang, Kangwen; Zhou, Hui; Shen, Xuling; Zhou, Qian; Ru, Qitian; Bai, Dongbi; Zeng, Heping

    2012-05-01

    We report on a simple scheme to precisely control the carrier-envelope phase of a nonlinear-polarization-rotation mode-locked self-started Yb-fiber laser system with an average output power of ∼7 W and a pulse width of 130 fs. The offset frequency was locked to the repetition rate of ∼64.5 MHz with a relative linewidth of ∼1.4 MHz by using a self-referenced feed-forward scheme based on an acousto-optic frequency shifter. The phase noise and timing jitter were calculated to be 370 mrad and 120 as, respectively.

  7. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763

  8. Quantum locking of mirrors in interferometers.

    PubMed

    Courty, Jean-Michel; Heidmann, Antoine; Pinard, Michel

    2003-02-28

    We show that quantum noise in very sensitive interferometric measurements such as gravitational-wave detectors can be drastically modified by quantum feedback. We present a new scheme based on active control to lock the motion of a mirror to a reference mirror at the quantum level. This simple technique allows one to reduce quantum effects of radiation pressure and to greatly enhance the sensitivity of the detection.

  9. Uplink Packet-Data Scheduling in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Choi, Young Woo; Kim, Seong-Lyun

    In this letter, we consider the uplink packet scheduling for non-real-time data users in a DS-CDMA system. As an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.

  10. Explicit Low-Thrust Guidance for Reference Orbit Targeting

    NASA Technical Reports Server (NTRS)

    Lam, Try; Udwadia, Firdaus E.

    2013-01-01

    The problem of a low-thrust spacecraft controlled to a reference orbit is addressed in this paper. A simple and explicit low-thrust guidance scheme with constrained thrust magnitude is developed by combining the fundamental equations of motion for constrained systems from analytical dynamics with a Lyapunov-based method. Examples are given for a spacecraft controlled to a reference trajectory in the circular restricted three body problem.

  11. Severe snow loads on mountain afforestation in Japan

    Treesearch

    Ryuzo Nitta; Yoshio Ozeki; Shoichi Niwano

    1991-01-01

    A simple device for estimating snow settling force on tree branches was used to determine the distribution of snow settling force at various heights in a snowy mountainous region in Japan. A trapezoidal distribution of snow settling force was found to exist at all sites tested. It is thought that a zoning scheme based on the damaging potential of snow on young man-made...

  12. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    PubMed

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, and the other gives the ground state in a tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in an opposite way by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP as well as TRD provides new implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG that is based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.

  13. Early forest fire detection using principal component analysis of infrared video

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Radjabi, Ryan; Jacobs, John T.

    2011-09-01

    A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement. The false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally-uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of moving tree branches due to wind.
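
    A minimal sketch of the processing chain described above (stack co-registered IR frames, apply a principal component transform over the temporal dimension, then threshold and median-filter a low-order PC image) is shown below; the random frames, the choice of PC image and the threshold rule are placeholders, not the authors' settings.

      import numpy as np
      from scipy.ndimage import median_filter

      def principal_component_images(frames):
          """frames: array of shape (T, H, W) holding co-registered IR frames."""
          t, h, w = frames.shape
          x = frames.reshape(t, h * w).astype(float)
          cov = np.cov(x)                          # T x T temporal covariance
          _, vecs = np.linalg.eigh(cov)
          pcs = vecs.T[::-1] @ x                   # PC images, strongest component first
          return pcs.reshape(t, h, w)

      frames = np.random.default_rng(0).random((16, 64, 64))   # placeholder IR sequence
      pc = principal_component_images(frames)[1]               # inspect a low-order PC image
      candidate = np.abs(pc) > np.abs(pc).mean() + 3.0 * pc.std()
      mask = median_filter(candidate.astype(np.uint8), size=3) # remove isolated clutter pixels
      print(int(mask.sum()), "candidate fire-plume pixels")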

  14. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have shown limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false positive removing strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately than the simple multi-scale method. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.
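
    As a very rough illustration of slope-based layer flagging (not the authors' algorithm: their threshold setting, boundary selection and false-positive removal strategies are omitted), the sketch below fits straight lines to consecutive range segments of a profile and flags where the fitted slope turns strongly positive (candidate base) or strongly negative (candidate transition toward the top); all parameter values are arbitrary.

      import numpy as np

      def layer_candidates(profile, seg_len=10, slope_thresh=0.05):
          slopes = []
          for start in range(0, len(profile) - seg_len, seg_len):
              x = np.arange(seg_len)
              slope, _ = np.polyfit(x, profile[start:start + seg_len], 1)
              slopes.append(slope)
          bases = [i * seg_len for i in range(1, len(slopes))
                   if slopes[i] > slope_thresh and slopes[i - 1] <= slope_thresh]
          tops = [i * seg_len for i in range(1, len(slopes))
                  if slopes[i] < -slope_thresh and slopes[i - 1] >= -slope_thresh]
          return bases, tops

      profile = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 3.0, 30),
                                np.linspace(3.0, 1.0, 30), np.full(50, 1.0)])
      noisy = profile + np.random.default_rng(2).normal(0.0, 0.02, profile.size)
      print(layer_candidates(noisy))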

  15. Coherent anti-Stokes Raman scattering spectroscope/microscope based on a widely tunable laser source

    NASA Astrophysics Data System (ADS)

    Dementjev, A.; Gulbinas, V.; Serbenta, A.; Kaucikas, M.; Niaura, G.

    2010-03-01

    We present a coherent anti-Stokes Raman scattering (CARS) microscope based on a robust and simple laser source. A picosecond laser operating in a cavity dumping regime at the 1 MHz repetition rate was used to pump a traveling wave optical parametric generator, which serves as a two-color excitation light source for the CARS microscope. We demonstrate the ability of the presented CARS microscope to measure CARS spectra and images by using several detection schemes.

  16. Passive demodulation of miniature fiber-optic-based interferometric sensors using a time-multiplexing technique.

    PubMed

    Santos, J L; Jackson, D A

    1991-08-01

    A passive demodulation technique suitable for interferometric interrogation of short optical cavities is described. It is based on time multiplexing of two low-finesse Fabry-Perot interferometers subject to the same measurand and with a differential optical phase of pi/2 (modulo 2pi). Independently of the cavity length, two optical outputs in quadrature are generated, which permits signal reading free of fading. The concept is demonstrated for the measurement of vibration using a simple processing scheme.

  17. Nearly deterministic quantum Fredkin gate based on weak cross-Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Wu, Yun-xiang; Zhu, Chang-hua; Pei, Chang-xing

    2016-09-01

    A scheme of an optical quantum Fredkin gate is presented based on weak cross-Kerr nonlinearity. By an auxiliary coherent state with the cross-Kerr nonlinearity effect, photons can interact with each other indirectly, and a non-demolition measurement for photons can be implemented. Combined with the homodyne detection, classical feedforward, polarization beam splitters and Pauli-X operations, a controlled-path gate is constructed. Furthermore, a quantum Fredkin gate is built based on the controlled-path gate. The proposed Fredkin gate is simple in structure and feasible by current experimental technology.

  18. Secure quantum key distribution using continuous variables of single photons.

    PubMed

    Zhang, Lijian; Silberhorn, Christine; Walmsley, Ian A

    2008-03-21

    We analyze the distribution of secure keys using quantum cryptography based on the continuous variable degree of freedom of entangled photon pairs. We derive the information capacity of a scheme based on the spatial entanglement of photons from a realistic source, and show that the standard measures of security known for quadrature-based continuous variable quantum cryptography (CV-QKD) are inadequate. A specific simple eavesdropping attack is analyzed to illuminate how secret information may be distilled well beyond the bounds of the usual CV-QKD measures.

  19. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse positions modulation (m-PPM) without the use of a complex decoding algorithm in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modification and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss with a larger channel capacity, and a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.

  20. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    PubMed

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
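    As a rough illustration of the kind of remix such a scheme performs, the sketch below boosts the centre-panned content of a stereo recording (where vocals, bass and drums usually sit) and re-weights its harmonic and percussive parts. It is only a schematic under the assumption that librosa is available; the gains, the separation method and the channel handling used by the authors differ.

```python
import numpy as np
import librosa  # assumed available; any HPSS implementation would do


def emphasize_center(left, right, center_gain=2.0, percussive_gain=1.5):
    """Schematic stereo remix: boost the centre channel and its percussive part.

    left, right : float arrays holding the two stereo channels.
    Returns a remixed stereo pair (illustrative, not the authors' exact algorithm).
    """
    mid = 0.5 * (left + right)      # centre-panned content (vocals/bass/drums)
    side = 0.5 * (left - right)     # spatially spread accompaniment

    # Harmonic/percussive separation of the centre channel
    harmonic, percussive = librosa.effects.hpss(mid)
    mid_boosted = center_gain * (harmonic + percussive_gain * percussive)

    new_left = mid_boosted + side
    new_right = mid_boosted - side
    peak = max(np.max(np.abs(new_left)), np.max(np.abs(new_right)), 1e-9)
    return new_left / peak, new_right / peak  # normalise to avoid clipping
```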

  1. The Reconstruction Problem Revisited

    NASA Technical Reports Server (NTRS)

    Suresh, Ambaby

    1999-01-01

    The role of reconstruction in avoiding oscillations in upwind schemes is reexamined, with the aim of providing simple, concise proofs. In one dimension, it is shown that if the reconstruction is any arbitrary function bounded by neighboring cell averages and increasing within a cell for increasing data, the resulting scheme is monotonicity preserving, even though the reconstructed function may have overshoots and undershoots at the cell edges and is in general not a monotone function. In the special case of linear reconstruction, it is shown that merely bounding the reconstruction between neighboring cell averages is sufficient to obtain a monotonicity preserving scheme. In two dimensions, it is shown that some 1D TVD limiters applied in each direction result in schemes that are not positivity preserving, i.e., do not give positive updates when the data are positive. A simple proof is given to show that if the reconstruction inside the cell is bounded by the neighboring cell averages (including corner neighbors), then the scheme is positivity preserving. A new limiter that enforces this condition but is not as dissipative as the minmod limiter is also presented.
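    The bounding condition discussed above is easiest to see with the classical minmod-limited linear reconstruction, sketched below for a 1D array of cell averages; this is the standard limiter the paper compares against, not the new, less dissipative limiter it proposes.

```python
import numpy as np


def minmod(a, b):
    """Return the argument of smaller magnitude if a and b share a sign, else 0."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)


def linear_reconstruction(u):
    """Left/right interface values of a minmod-limited linear reconstruction.

    u : 1D array of cell averages (interior cells only; boundaries ignored).
    The reconstructed values stay bounded by the neighbouring cell averages,
    which is the condition shown to give a monotonicity-preserving scheme.
    """
    du_minus = u[1:-1] - u[:-2]          # backward differences
    du_plus = u[2:] - u[1:-1]            # forward differences
    slope = minmod(du_minus, du_plus)    # limited slope in each interior cell
    u_left = u[1:-1] - 0.5 * slope       # value at the left cell edge
    u_right = u[1:-1] + 0.5 * slope      # value at the right cell edge
    return u_left, u_right
```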

  2. A deterministic Lagrangian particle separation-based method for advective-diffusion problems

    NASA Astrophysics Data System (ADS)

    Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.

    2008-12-01

    A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.

  3. Dynamo-based scheme for forecasting the magnitude of solar activity cycles

    NASA Technical Reports Server (NTRS)

    Layden, A. C.; Fox, P. A.; Howard, J. M.; Sarajedini, A.; Schatten, K. H.

    1991-01-01

    This paper presents a general framework for forecasting the smoothed maximum level of solar activity in a given cycle, based on a simple understanding of the solar dynamo. This type of forecasting requires knowledge of the sun's polar magnetic field strength at the preceding activity minimum. Because direct measurements of this quantity are difficult to obtain, the quality of a number of proxy indicators already used by other authors, each physically related to the sun's polar field, is evaluated. These indicators are subjected to a rigorous statistical analysis, and the analysis technique for each indicator is specified in detail in order to simplify and systematize reanalysis for future use. It is found that several of these proxies are in fact poorly correlated or uncorrelated with solar activity, and thus are of little value for predicting activity maxima. Also presented is a scheme in which the predictions of the individual proxies are combined via an appropriately weighted mean to produce a compound prediction. The scheme is then applied to the current cycle 22, and a maximum smoothed international sunspot number of 171 ± 26 is estimated.
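    As a minimal sketch of the combination step, inverse-variance weighting is one natural way to form an "appropriately weighted mean" of the individual proxy predictions; the weights actually used in the paper come from its own statistical analysis and may differ.

```python
import numpy as np


def combine_predictions(predictions, variances):
    """Inverse-variance weighted mean of proxy predictions and its uncertainty.

    predictions : array of predicted maximum sunspot numbers, one per proxy
    variances   : array of the corresponding prediction variances
    """
    predictions = np.asarray(predictions, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    combined = np.sum(weights * predictions) / np.sum(weights)
    combined_sigma = np.sqrt(1.0 / np.sum(weights))
    return combined, combined_sigma


# Example with made-up numbers (not the proxies used in the paper):
# combine_predictions([160, 185, 170], [30**2, 40**2, 25**2])
```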

  4. Additive schemes for certain operator-differential equations

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2010-12-01

    Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.

  5. Adaptive fuzzy-neural-network control for maglev transportation system.

    PubMed

    Wai, Rong-Jong; Lee, Jeng-Dao

    2008-01-01

    A magnetic-levitation (maglev) transportation system including levitation and propulsion control is a subject of considerable scientific interest because of its highly nonlinear and unstable behaviors. In this paper, the dynamic model of a maglev transportation system including levitated electromagnets and a propulsive linear induction motor (LIM), based on the concepts of mechanical geometry and motion dynamics, is developed first. Then, a model-based sliding-mode control (SMC) strategy is introduced. In order to alleviate the chattering phenomena caused by an inappropriate selection of the uncertainty bound, a simple bound estimation algorithm is embedded in the SMC strategy to form an adaptive sliding-mode control (ASMC) scheme. However, the estimated bound is always positive, so that tracking errors introduced by any uncertainty will cause the estimated bound to increase, even to infinity, with time. Therefore, an adaptive fuzzy-neural-network control (AFNNC) scheme is further designed by imitating the SMC strategy for the maglev transportation system. In the model-free AFNNC, online learning algorithms are designed to cope with the chattering phenomena caused by the sign action in the SMC design, and to ensure the stability of the controlled system without the requirement of auxiliary compensated controllers, despite the existence of uncertainties. The outputs of the AFNNC scheme can be directly supplied to the electromagnets and the LIM without complicated control transformations, relaxing the strict constraints of conventional model-based control methodologies. The effectiveness of the proposed control schemes for the maglev transportation system is verified by numerical simulations, and the superiority of the AFNNC scheme is shown in comparison with the SMC and ASMC strategies.

  6. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
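    One plausible reading of a single-threshold, density-adaptive quantization is sketched below: a sample is retained only if it lies farther than the shrinkage threshold from every sample already kept, so dense regions contribute few representatives. This is an illustrative interpretation, not the authors' exact DQS.

```python
import numpy as np


def density_dependent_quantize(X, threshold):
    """Greedy sketch of a distance-threshold quantization of a data set.

    X         : (n_samples, n_features) array
    threshold : shrinkage threshold; larger values keep fewer samples
    Returns the indices of the retained (quantized) subset.
    """
    kept = [0]  # always keep the first sample
    for i in range(1, X.shape[0]):
        dists = np.linalg.norm(X[kept] - X[i], axis=1)
        if np.min(dists) > threshold:   # far from every kept sample -> keep it
            kept.append(i)
    return np.array(kept)


# The quantized subset X[kept] could then feed a Nystrom feature approximation
# for an LS-SVM, as described above.
```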

  7. The impact of catchment source group classification on the accuracy of sediment fingerprinting outputs.

    PubMed

    Pulley, Simon; Foster, Ian; Collins, Adrian L

    2017-06-01

    The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (2 and 3) were primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (1). Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Aerosol Complexity and Implications for Predictability and Short-Term Forecasting

    NASA Technical Reports Server (NTRS)

    Colarco, Peter

    2016-01-01

    There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect the aerosol lifecycle, plume height, long-range transport, overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems benefits surface field biases (e.g., T2m, U10m). Including aerosol effects has an impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. The above points apply especially to aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approx. 10-20 tracers). Simple schemes, though, imply simple representations of aerosol absorption and, importantly for aerosol-cloud interaction, of the particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. There are prospects for complex sectional schemes to tune modal (and even bulk) schemes toward a better selection of size representation. Ripe topics for further research include: systematic documentation of the benefits of no vs. climatological vs. interactive (direct, then direct plus indirect) aerosols; documenting the aerosol impact on analysis increments and its inclusion in the NWP data assimilation operator; and further refinement of baseline assumptions in model design (e.g., absorption, particle size distribution). Not covered here are model resolution and the interplay of other physical processes with aerosols (e.g., moist physics, obviously important) and chemistry.

  9. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.

  10. Isca, v1.0: a framework for the global modelling of the atmospheres of Earth and other planets at varying levels of complexity

    NASA Astrophysics Data System (ADS)

    Vallis, Geoffrey K.; Colyer, Greg; Geen, Ruth; Gerber, Edwin; Jucker, Martin; Maher, Penelope; Paterson, Alexander; Pietschnig, Marianne; Penn, James; Thomson, Stephen I.

    2018-03-01

    Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.

  11. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^–1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^–1.

  12. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
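    For reference, the two ingredients compared above can be written compactly. These are illustrative forms: L and L-1 denote the cardinal numbers of the basis-set pair, α is either a global or a system-dependent exponent, and "small"/"large" stand for the basis sets used in the additivity correction.

```latex
% Two-point extrapolation derived from E(L) = E_CBS + B L^{-\alpha}:
E_{\mathrm{CBS}} \approx
  \frac{L^{\alpha}\,E(L) - (L-1)^{\alpha}\,E(L-1)}{L^{\alpha} - (L-1)^{\alpha}}

% MP2-based basis-set additivity (correction) scheme:
E^{\mathrm{CCSD}}_{\mathrm{CBS}} \approx E^{\mathrm{CCSD}}_{\mathrm{small}}
  + \left( E^{\mathrm{MP2}}_{\mathrm{large}} - E^{\mathrm{MP2}}_{\mathrm{small}} \right)
```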

  13. Two-photon Shack-Hartmann wavefront sensor.

    PubMed

    Xia, Fei; Sinefeld, David; Li, Bo; Xu, Chris

    2017-03-15

    We introduce a simple wavefront sensing scheme for aberration measurement of pulsed laser beams at near-infrared wavelengths (<2200 nm), where detectors are not always available or are very expensive. The method is based on two-photon absorption in a silicon detector array for longer-wavelength detection. We demonstrate the simplicity of such an implementation with a commercially available Shack-Hartmann wavefront sensor and discuss the detection sensitivity of the method.

  14. Simple and high-speed polarization-based QKD

    NASA Astrophysics Data System (ADS)

    Grünenfelder, Fadri; Boaron, Alberto; Rusca, Davide; Martin, Anthony; Zbinden, Hugo

    2018-01-01

    We present a simplified BB84 protocol with only three quantum states and one decoy-state level. We implement this scheme using the polarization degree of freedom at telecom wavelength. Only one pulsed laser is used in order to reduce possible side-channel attacks. The repetition rate of 625 MHz and the achieved secret bit rate of 23 bps over 200 km of standard fiber represent the current state of the art.

  15. A Unified Approach to Motion Control of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators and to holonomic mobile robots such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.

  16. A Simple Qualitative Analysis Scheme for Several Environmentally Important Elements

    ERIC Educational Resources Information Center

    Lambert, Jack L.; Meloan, Clifton E.

    1977-01-01

    Describes a scheme that uses precipitation, gas evolution, complex ion formation, and flame tests to analyze for the following ions: Hg(I), Hg(II), Sb(III), Cr(III), Pb(II), Sr(II), Cu(II), Cd(II), As(III), chloride, nitrate, and sulfate. (MLH)

  17. To sort or not to sort: the impact of spike-sorting on neural decoding performance.

    PubMed

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
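    For orientation, the optimal linear estimator used as one of the two decoders reduces, in its simplest form, to a regularized least-squares map from binned spike (or threshold-crossing) counts to kinematics. The sketch below shows that generic form and is not the authors' exact implementation.

```python
import numpy as np


def fit_linear_decoder(counts, kinematics, ridge=1e-3):
    """Fit x_hat = [1, counts] @ W by ridge regression.

    counts     : (n_bins, n_units) spike or threshold-crossing counts
    kinematics : (n_bins, n_dims) arm position or velocity
    """
    X = np.hstack([np.ones((counts.shape[0], 1)), counts])   # add intercept
    A = X.T @ X + ridge * np.eye(X.shape[1])
    W = np.linalg.solve(A, X.T @ kinematics)
    return W


def decode(counts, W):
    X = np.hstack([np.ones((counts.shape[0], 1)), counts])
    return X @ W


# The same counts matrix can be rebuilt per sorting scheme (unsorted threshold
# crossings, expert-sorted units, automatic sorting, ...) to compare decoders.
```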

  18. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    NASA Astrophysics Data System (ADS)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

  19. The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks

    NASA Astrophysics Data System (ADS)

    Ristenpart, Thomas; Yilek, Scott

    Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.

  20. An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers

    PubMed Central

    Zapateiro De la Hoz, Mauricio; Vidal, Yolanda

    2015-01-01

    Security and secrecy are some of the important concerns in the communications world. In recent years, several encryption techniques have been proposed in order to improve the secrecy of the information transmitted. Chaos-based encryption techniques are being widely studied as part of the problem because of the highly unpredictable and random-looking nature of chaotic signals. In this paper we propose a digital communication system that uses the logistic map, which is a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. On the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields, which are simple yet powerful development kits that allow for the implementation of the communication system for testing purposes. PMID:26413563
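    A minimal sketch of the generic idea, delta-modulating the message into bits and XORing them with a keystream drawn from a chaotic logistic map, is given below. The growth parameter, the thresholding rule and the way the key signal is handled here are illustrative assumptions, not the exact design tested on the Arduino boards.

```python
import numpy as np


def logistic_keystream(n_bits, x0=0.61, r=3.99):
    """Binary keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x, bits = x0, np.empty(n_bits, dtype=np.uint8)
    for k in range(n_bits):
        x = r * x * (1.0 - x)
        bits[k] = 1 if x > 0.5 else 0   # simple thresholding (assumed)
    return bits


def delta_modulate(signal, step=0.05):
    """One-bit delta modulation of a sampled analogue message."""
    estimate, bits = 0.0, np.empty(len(signal), dtype=np.uint8)
    for k, s in enumerate(signal):
        bits[k] = 1 if s > estimate else 0
        estimate += step if bits[k] else -step
    return bits


# Encryption and decryption are the same XOR with the shared keystream.
t = np.linspace(0, 1, 1000)
message_bits = delta_modulate(np.sin(2 * np.pi * 5 * t))
key = logistic_keystream(len(message_bits))
cipher = message_bits ^ key          # transmitted bitstream
recovered_bits = cipher ^ key        # receiver with matching map parameters
```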

  1. Design and implementation of adaptive PI control schemes for web tension control in roll-to-roll (R2R) manufacturing.

    PubMed

    Raul, Pramod R; Pagilla, Prabhakar R

    2015-05-01

    In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching of the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, they are easy to implement in real time, and they automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
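    The underlying control law in both schemes is the discrete PI update sketched below; what the two adaptive schemes add is the on-line adjustment of Kp and Ki (model-reference matching in the first, relay-feedback initialization with indirect adaptation in the second), which is not reproduced in this sketch.

```python
class PIController:
    """Discrete PI tension controller; the gains would be adapted on-line."""

    def __init__(self, kp, ki, dt, u_min=None, u_max=None):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, tension_ref, tension_meas):
        error = tension_ref - tension_meas
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if self.u_max is not None:
            u = min(u, self.u_max)       # simple actuator saturation
        if self.u_min is not None:
            u = max(u, self.u_min)
        return u                         # e.g. torque command to the roller drive
```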

  2. Bending and buckling formulation of graphene sheets based on nonlocal simple first-order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Golmakani, M. E.; Malikan, M.; Sadraee Far, M. N.; Majidi, H. R.

    2018-06-01

    This paper presents a formulation based on simple first-order shear deformation theory (S-FSDT) for large deflection and buckling of orthotropic single-layered graphene sheets (SLGSs). The S-FSDT has many advantages compared to the classical plate theory (CPT) and conventional FSDT, such as having no need for a shear correction factor, containing fewer unknowns than the existing FSDT, and having strong similarities with the CPT. Governing equations and boundary conditions are derived based on Hamilton's principle using the nonlocal differential constitutive relations of Eringen and the von Kármán geometrical model. Numerical results are obtained using the differential quadrature (DQ) method and the Newton–Raphson iterative scheme. Finally, some comparison studies are carried out to show the high accuracy and reliability of the present formulations compared to the nonlocal CPT and FSDT for different thicknesses, elastic foundations and nonlocal parameters.

  3. Gyrokinetic Magnetohydrodynamics and the Associated Equilibrium

    NASA Astrophysics Data System (ADS)

    Lee, W. W.; Hudson, S. R.; Ma, C. H.

    2017-10-01

    A proposed scheme for the calculation of gyrokinetic MHD and its associated equilibrium is discussed in relation to a recent paper on the subject. The scheme is based on the time-dependent gyrokinetic vorticity equation and parallel Ohm's law, as well as the associated gyrokinetic Ampere's law. This set of equations, in terms of the electrostatic potential, ϕ, and the vector potential, A, supports both spatially varying perpendicular and parallel pressure gradients and their associated currents. The MHD equilibrium is reached when ϕ -> 0 and A becomes constant in time, which, in turn, gives ∇·(J∥ + J⊥) = 0 and the associated magnetic islands. Examples in simple cylindrical geometry will be given. The present work is partially supported by US DoE Grant DE-AC02-09CH11466.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krempasky, J.; Flechsig, U.; Korhonen, T.

    Synchronous monochromator and insertion device energy scans were implemented at the Surfaces/Interfaces: Microscopy (SIM) beamline in order to provide users with fast X-ray magnetic circular dichroism (XMCD) studies. A simple software control scheme is proposed based on a fast monochromator run-time energy readback which quickly updates the requested insertion device energy during an on-the-fly X-ray absorption scan (XAS). In this scheme the Plain Grating Monochromator (PGM) motion control, being much slower than the insertion device (an APPLE-II type undulator), acts as a 'master' controlling the undulator 'slave' energy position. This master-slave software implementation exploits EPICS distributed device control over the computer network and allows for quasi-synchronous motion control combined with the data acquisition needed for the XAS or XMCD experiment.

  5. An accurate reactive power control study in virtual flux droop control

    NASA Astrophysics Data System (ADS)

    Wang, Aimeng; Zhang, Jia

    2017-12-01

    This paper investigates the problem of reactive power sharing based on the virtual flux droop method. First, the flux droop control method is derived, in which complicated multiple feedback loops and parameter regulation are avoided. Then, the reasons for inaccurate reactive power sharing are theoretically analyzed. Further, a novel reactive power control scheme is proposed which consists of three parts: compensation control, voltage recovery control and flux droop control. Finally, the proposed reactive power control strategy is verified in a simplified microgrid model with two parallel DGs. The simulation results show that the proposed control scheme can achieve accurate reactive power sharing and zero voltage deviation. Meanwhile, it offers the advantages of simple control and excellent dynamic and static performance.

  6. Analysis of a decision model in the context of equilibrium pricing and order book pricing

    NASA Astrophysics Data System (ADS)

    Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.

    2014-12-01

    An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.

  7. Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals.

    PubMed

    Duchemin, Ivan; Li, Jing; Blase, Xavier

    2017-03-14

    The introduction of auxiliary bases to approximate molecular orbital products has paved the way to significant savings in the evaluation of four-center two-electron Coulomb integrals. We present a generalized dual space strategy that sheds a new light on variants over the standard density and Coulomb-fitting schemes, including the possibility of introducing minimization constraints. We improve in particular the charge- or multipole-preserving strategies introduced respectively by Baerends and Van Alsenoy that we compare to a simple scheme where the Coulomb metric is used for lowest angular momentum auxiliary orbitals only. We explore the merits of these approaches on the basis of extensive Hartree-Fock and MP2 calculations over a standard set of medium size molecules.

  8. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    Discussed in this paper is an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between solid and liquid phases which are used to solve Stefan problems.

  9. Harmonic generation with multiple wiggler schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonifacio, R.; De Salvo, L.; Pierini, P.

    1995-02-01

    In this paper the authors give a simple theoretical description of the basic physics of the single pass high gain free electron laser (FEL), describing in some detail the FEL bunching properties and the harmonic generation technique with a multiple-wiggler scheme or a high gain optical klystron configuration.

  10. System identification of propagating wave segments in excitable media and its application to advanced control

    NASA Astrophysics Data System (ADS)

    Katsumata, Hisatoshi; Konishi, Keiji; Hara, Naoyuki

    2018-04-01

    The present paper proposes a scheme for controlling wave segments in excitable media. This scheme consists of two phases: in the first phase, a simple mathematical model for wave segments is derived using only the time series data of input and output signals for the media; in the second phase, the model derived in the first phase is used in an advanced control technique. We demonstrate with numerical simulations of the Oregonator model that this scheme performs better than a conventional control scheme.

  11. Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving; numerical experiments for advection as well as for the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.

  12. Trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.

  13. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

    The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.

  14. Simple measurement-based admission control for DiffServ access networks

    NASA Astrophysics Data System (ADS)

    Lakkakorpi, Jani

    2002-07-01

    In order to provide good Quality of Service (QoS) in a Differentiated Services (DiffServ) network, a dynamic admission control scheme is definitely needed as an alternative to overprovisioning. In this paper, we present a simple measurement-based admission control (MBAC) mechanism for DiffServ-based access networks. Instead of using active measurements only or doing purely static bookkeeping with parameter-based admission control (PBAC), the admission control decisions are based on bandwidth reservations and periodically measured & exponentially averaged link loads. If any link load on the path between two endpoints is over the applicable threshold, access is denied. Link loads are periodically sent to Bandwidth Broker (BB) of the routing domain, which makes the admission control decisions. The information needed in calculating the link loads is retrieved from the router statistics. The proposed admission control mechanism is verified through simulations. Our results prove that it is possible to achieve very high bottleneck link utilization levels and still maintain good QoS.
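    A minimal sketch of the decision logic described above, bandwidth bookkeeping plus exponentially averaged link-load measurements checked against a per-link threshold, is given below. The averaging weight, the threshold value and the data structures are illustrative assumptions.

```python
class BandwidthBroker:
    """Toy measurement-based admission control for a set of links."""

    def __init__(self, capacities, threshold=0.9, alpha=0.25):
        self.capacities = dict(capacities)          # link id -> capacity (bps)
        self.reserved = {link: 0.0 for link in self.capacities}
        self.avg_load = {link: 0.0 for link in self.capacities}
        self.threshold = threshold                  # maximum allowed utilization
        self.alpha = alpha                          # exponential averaging weight

    def report_load(self, link, measured_load):
        """Periodic load report from a router (bps carried on the link)."""
        self.avg_load[link] = (self.alpha * measured_load
                               + (1.0 - self.alpha) * self.avg_load[link])

    def admit(self, path, requested_bw):
        """Admit the flow only if every link on the path stays below threshold."""
        for link in path:
            projected = max(self.avg_load[link], self.reserved[link]) + requested_bw
            if projected > self.threshold * self.capacities[link]:
                return False                        # deny: some link too loaded
        for link in path:
            self.reserved[link] += requested_bw     # bookkeeping for the new flow
        return True
```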

  15. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
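    Schematically, the single-batch maximum-likelihood step amounts to fitting one or two scalar covariance parameters to the innovations (observation minus forecast). The sketch below assumes a simple parametrization S(θ) = σ_b² H B̃ Hᵀ + σ_o² R̃ with known structure matrices, which is an illustrative choice rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize


def fit_covariance_scales(d, HBHt, R_struct):
    """Maximum-likelihood fit of background/observation error variances.

    d        : innovation vector y - H(x_forecast) for one batch of observations
    HBHt     : forecast-error covariance structure mapped to observation space
    R_struct : observation-error covariance structure (e.g. the identity)
    """
    def neg_log_likelihood(log_theta):
        sb2, so2 = np.exp(log_theta)                 # keep variances positive
        S = sb2 * HBHt + so2 * R_struct              # innovation covariance
        sign, logdet = np.linalg.slogdet(S)
        return 0.5 * (logdet + d @ np.linalg.solve(S, d))

    result = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]),
                      method="Nelder-Mead")
    return np.exp(result.x)                          # (sigma_b^2, sigma_o^2)
```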

  16. A scheme based on ICD-10 diagnoses and drug prescriptions to stage chronic kidney disease severity in healthcare administrative records.

    PubMed

    Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus

    2018-04-01

    Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in the Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m 2 , it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile, down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this often is seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.

  17. Stabilization and tracking control of X-Z inverted pendulum with sliding-mode control.

    PubMed

    Wang, Jia-Jun

    2012-11-01

    X-Z inverted pendulum is a new kind of inverted pendulum which can move with the combination of the vertical and horizontal forces. Through a new transformation, the X-Z inverted pendulum is decomposed into three simple models. Based on the simple models, sliding-mode control is applied to stabilization and tracking control of the inverted pendulum. The performance of the sliding mode control is compared with that of the PID control. Simulation results show that the design scheme of sliding-mode control is effective for the stabilization and tracking control of the X-Z inverted pendulum. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A simple system for 160GHz optical terahertz wave generation and data modulation

    NASA Astrophysics Data System (ADS)

    Li, Yihan; He, Jingsuo; Sun, Xueming; Shi, Zexia; Wang, Ruike; Cui, Hailin; Su, Bo; Zhang, Cunlin

    2018-01-01

    A simple system based on two cascaded Mach-Zehnder modulators, which can generate 160 GHz optical terahertz waves from a 40 GHz microwave source, is simulated and tested in this paper. A fiber grating filter is used in the system to filter out the optical carrier. By properly adjusting the modulator DC bias voltages and the signal voltages and phases, a frequency-quadrupled optical terahertz wave can be generated with the fiber grating. This notch fiber grating filter is well suited for terahertz-over-fiber (TOF) communication systems. The scheme greatly reduces the cost of long-distance terahertz communication. Furthermore, a 10 Gbps digital signal is modulated onto the 160 GHz optical terahertz wave.

  19. A CLS-based survivable and energy-saving WDM-PON architecture

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Zhong, Wen-De; Zhang, Zhenrong; Luan, Feng

    2013-11-01

    We propose and demonstrate an improved survivable and energy-saving WDM-PON with colorless ONUs. It incorporates both energy-saving and self-healing operations. A simple effective energy-saving scheme is proposed by including an energy-saving control unit in the OLT and a control unit at each ONU. The energy-saving scheme realizes both dozing and sleep (offline) modes, which greatly improves the energy-saving efficiency for WDM-PONs. An intelligent protection switching scheme is designed in the OLT, which can distinguish if an ONU is in dozing/sleep (offline) state or a fiber is faulty. Moreover, by monitoring the optical power of each channel on both working and protection paths, the OLT can know the connection status of every fiber path, thus facilitating an effective protection switching and a faster failure recovery. The improved WDM-PON architecture not only significantly reduces energy consumption, but also performs self-healing operation in practical operation scenarios. The scheme feasibility is experimentally verified with 10 Gbit/s downstream and 1.25 Gbit/s upstream transmissions. We also examine the energy-saving efficiency of our proposed energy-saving scheme by simulation, which reveals that energy saving mainly arises from the dozing mode, not from the sleep mode when the ONU is in the online state.

  20. Restoration of Wavelet-Compressed Images and Motion Imagery

    DTIC Science & Technology

    2004-01-01

    (Only the report documentation page and scattered text fragments were recoverable for this record; a surviving fragment notes that the images are global translates of each other, with the global motion parameters known.)

  1. High-throughput purification of recombinant proteins using self-cleaving intein tags.

    PubMed

    Coolbaugh, M J; Shakalli Tang, M J; Wood, D W

    2017-01-01

    High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single-step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Dual-beam laser autofocusing system based on liquid lens

    NASA Astrophysics Data System (ADS)

    Zhang, Fumin; Yao, Yannan; Qu, Xinghua; Zhang, Tong; Pei, Bing

    2017-02-01

    A dual-beam laser autofocusing system is designed in this paper. The autofocusing system is based on a liquid lens with few moving parts and a fast response time, which makes the system simple, reliable, compact and fast. A novel "time-sharing focus, fast conversion" scheme is proposed. The scheme effectively solves the problem that the guiding laser and the working laser cannot focus at the same target point because of chromatic aberration. This scheme not only allows both the guiding laser and the working laser to achieve optimal focusing in the guiding stage and the working stage, respectively, but also greatly reduces the system complexity and simplifies the focusing process, reducing the autofocusing time of the working laser to about 10 ms. Over the distance range of 1 m to 30 m, the autofocusing spot size is kept under 4.3 mm at 30 m and is just 0.18 mm at 1 m. Owing to this self-adaptivity, the spot size is much less influenced by the target distance than that of a collimated laser with a small divergence angle. The dual-beam laser autofocusing system based on the liquid lens is fully automatic, compact and efficient. It fully meets the needs of dynamic and adaptive operation and will play an important role in a number of long-range control applications.

  3. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.

  4. Quantifying control effort of biological and technical movements: an information-entropy-based approach.

    PubMed

    Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S

    2014-01-01

    In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It remains open, however, how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated, allowing the comparison of models of biological and technical control systems. Applied, as an example, to (bio-)mechanical models of hopping, the method reveals that the required information for generating hopping with a muscle driven by a simple reflex control scheme is only I=32 bits versus I=660 bits with a DC motor and a proportional differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
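
    As a rough, hedged illustration of the entropy-based cost idea (not the authors' exact formulation), one can discretize each sensor signal the controller needs and sum the Shannon entropies of the resulting symbol streams; the signal names, bin count, and random data in this Python sketch are hypothetical.

      import numpy as np

      def shannon_entropy_bits(signal, n_bins=16):
          """Shannon entropy (in bits) of a discretized 1-D sensor signal."""
          counts, _ = np.histogram(signal, bins=n_bins)
          p = counts / counts.sum()
          p = p[p > 0]                      # ignore empty bins
          return -np.sum(p * np.log2(p))

      def control_information_cost(sensor_signals, n_bins=16):
          """Sum the entropies of all sensor signals required by a controller."""
          return sum(shannon_entropy_bits(s, n_bins) for s in sensor_signals)

      # Hypothetical comparison: a reflex-like controller needing one signal
      # versus a PD-like controller needing position and velocity.
      rng = np.random.default_rng(0)
      muscle_length = rng.normal(size=2000)            # stand-in sensor signal
      position, velocity = rng.normal(size=(2, 2000))  # stand-in sensor signals
      print(control_information_cost([muscle_length]))
      print(control_information_cost([position, velocity]))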

  5. Colour image compression by grey to colour conversion

    NASA Astrophysics Data System (ADS)

    Drew, Mark S.; Finlayson, Graham D.; Jindal, Abhilash

    2011-03-01

    Instead of de-correlating image luminance from chrominance, some use has been made of the correlation between the luminance component of an image and its chromatic components, or the correlation between colour components, for colour image compression. In one approach, the Green colour channel was taken as a base, and the other colour channels or their DCT subbands were approximated as polynomial functions of the base inside image windows. This paper points out that we can do better if we introduce an addressing scheme into the image description such that similar colours are grouped together spatially. With a Luminance component base, we test several colour spaces and rearrangement schemes, including segmentation, and settle on a log-geometric-mean colour space. Along with PSNR versus bits-per-pixel, we found that spatially-keyed s-CIELAB colour error better identifies problem regions. Instead of segmentation, we found that rearranging on sorted chromatic components has almost equal performance and better compression. Here, we sort on each of the chromatic components and separately encode windows of each. The result consists of the original greyscale plane plus the polynomial coefficients of windows of rearranged chromatic values, which are then quantized. The simplicity of the method produces a fast and simple scheme for colour image and video compression, with excellent results.
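
    A minimal sketch of the window-wise polynomial idea described above (not the authors' full codec): inside each image window, a chromatic channel is approximated as a polynomial function of the luminance base and only the coefficients are kept. The window size, polynomial degree, and synthetic data below are illustrative assumptions.

      import numpy as np

      def encode_window(luma, chroma, degree=2):
          """Fit chroma = p(luma) inside one window; keep only the coefficients."""
          return np.polyfit(luma.ravel(), chroma.ravel(), degree)

      def decode_window(luma, coeffs):
          """Rebuild the chroma window from the luminance base and coefficients."""
          return np.polyval(coeffs, luma)

      # Illustrative 8x8 window in which chroma loosely follows luminance.
      rng = np.random.default_rng(1)
      luma = rng.uniform(0.0, 1.0, size=(8, 8))
      chroma = 0.3 + 0.5 * luma - 0.2 * luma**2 + 0.01 * rng.normal(size=(8, 8))

      coeffs = encode_window(luma, chroma)      # 3 coefficients instead of 64 samples
      chroma_hat = decode_window(luma, coeffs)
      print(float(np.abs(chroma - chroma_hat).mean()))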

  6. Simple adaptive control system design for a quadrotor with an internal PFC

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Nakamura, Takuto; Kumon, Makoto; Takagi, Taro

    2014-12-01

    The paper deals with an adaptive control system design problem for a four rotor helicopter or quadrotor. A simple adaptive control design scheme with a parallel feedforward compensator (PFC) in the internal loop of the considered quadrotor will be proposed based on the backstepping strategy. As is well known, the backstepping control strategy is one of the advanced control strategies for nonlinear systems. However, the control algorithm becomes complex if the system has higher-order relative degrees. We will show that one can skip some design steps of the backstepping method by introducing a PFC in the inner loop of the considered quadrotor, so that the structure of the obtained controller is simplified and a high-gain-based adaptive feedback control system can be designed. The effectiveness of the proposed method will be confirmed through numerical simulations.

  7. Comparison of Several Dissipation Algorithms for Central Difference Schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.; Turkel, E.

    1997-01-01

    Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.

  8. Two-species boson mixture on a ring: A group-theoretic approach to the quantum dynamics of low-energy excitations

    NASA Astrophysics Data System (ADS)

    Penna, Vittorio; Richaud, Andrea

    2017-11-01

    We investigate the weak excitations of a system made up of two condensates trapped in a Bose-Hubbard ring and coupled by an interspecies repulsive interaction. Our approach, based on the Bogoliubov approximation scheme, shows that one can reduce the problem Hamiltonian to the sum of sub-Hamiltonians Ĥk, each one associated to momentum modes ±k . Each Ĥk is then recognized to be an element of a dynamical algebra. This uncommon and remarkable property allows us to present a straightforward diagonalization scheme, to find constants of motion, to highlight the significant microscopic processes, and to compute their time evolution. The proposed solution scheme is applied to a simple but nontrivial closed circuit, the trimer. The dynamics of low-energy excitations, corresponding to weakly populated vortices, is investigated considering different choices of the initial conditions and the angular-momentum transfer between the two condensates is evidenced. Finally, the condition for which the spectral collapse and dynamical instability are observed is derived analytically.

  9. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
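
    A rough Python sketch of the point-to-average thresholding mentioned above, for a single 3x3 neighbourhood; whether the centre pixel is included and how the bits are ordered are assumptions here, not necessarily the published ILBP definition.

      import numpy as np

      def point_to_average_code(patch):
          """Threshold a 3x3 patch against its own mean and pack the bits."""
          patch = np.asarray(patch, dtype=float)
          bits = (patch.ravel() >= patch.mean()).astype(int)   # centre included (assumed)
          return int(sum(b << i for i, b in enumerate(bits)))

      patch = [[52, 60, 61],
               [49, 55, 70],
               [43, 48, 58]]
      print(point_to_average_code(patch))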

  10. Tangle-Free Finite Element Mesh Motion for Ablation Problems

    NASA Technical Reports Server (NTRS)

    Droba, Justin

    2016-01-01

    Mesh motion is the process by which a computational domain is updated in time to reflect physical changes in the material the domain represents. Such a technique is needed in the study of the thermal response of ablative materials, which erode when strong heating is applied to the boundary. Traditionally, the thermal solver is coupled with a linear elastic or biharmonic system whose sole purpose is to update mesh node locations in response to altering boundary heating. Simple mesh motion algorithms rely on boundary surface normals. In such schemes, evolution in time will eventually cause the mesh to intersect and "tangle" with itself, causing failure. Furthermore, such schemes are greatly limited in the problem geometries on which they will be successful. This paper presents a comprehensive and sophisticated scheme that tailors the directions of motion based on context. By choosing directions for each node smartly, the inevitable tangle can be completely avoided and mesh motion on complex geometries can be modeled accurately.

  11. DS-ARP: a new detection scheme for ARP spoofing attacks based on routing trace for ubiquitous environments.

    PubMed

    Song, Min Su; Lee, Jae Dong; Jeong, Young-Sik; Jeong, Hwa-Young; Park, Jong Hyuk

    2014-01-01

    Despite the convenience, ubiquitous computing suffers from many threats and security risks. Security considerations in the ubiquitous network are required to create enriched and more secure ubiquitous environments. The address resolution protocol (ARP) is a protocol used to identify the IP address and the physical address of the associated network card. ARP is designed to work without problems in general environments. However, since it does not include security measures against malicious attacks in its design, an attacker can impersonate another host using ARP spoofing or access important information. In this paper, we propose a new detection scheme for ARP spoofing attacks using a routing trace, which can be used to protect the internal network. Tracing the route can detect changes in the network path. The proposed scheme provides high constancy and compatibility because it does not alter the ARP protocol. In addition, it is simple and stable, as it does not use a complex algorithm or impose extra load on the computer system.

  12. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  13. DS-ARP: A New Detection Scheme for ARP Spoofing Attacks Based on Routing Trace for Ubiquitous Environments

    PubMed Central

    Song, Min Su; Lee, Jae Dong; Jeong, Hwa-Young; Park, Jong Hyuk

    2014-01-01

    Despite the convenience, ubiquitous computing suffers from many threats and security risks. Security considerations in the ubiquitous network are required to create enriched and more secure ubiquitous environments. The address resolution protocol (ARP) is a protocol used to identify the IP address and the physical address of the associated network card. ARP is designed to work without problems in general environments. However, since it does not include security measures against malicious attacks in its design, an attacker can impersonate another host using ARP spoofing or access important information. In this paper, we propose a new detection scheme for ARP spoofing attacks using a routing trace, which can be used to protect the internal network. Tracing the route can detect changes in the network path. The proposed scheme provides high constancy and compatibility because it does not alter the ARP protocol. In addition, it is simple and stable, as it does not use a complex algorithm or impose extra load on the computer system. PMID:25243205

  14. Joint Blind Source Separation by Multi-set Canonical Correlation Analysis

    PubMed Central

    Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D

    2009-01-01

    In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319

  15. Security of fragile authentication watermarks with localization

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica

    2002-04-01

    In this paper, we study the security of fragile image authentication watermarks that can localize tampered areas. We start by comparing the goals, capabilities, and advantages of image authentication based on watermarking and cryptography. Then we point out some common security problems of current fragile authentication watermarks with localization and classify attacks on authentication watermarks into five categories. By investigating the attacks and vulnerabilities of current schemes, we propose a variation of the Wong scheme [18] that is fast, simple, cryptographically secure, and resistant to all known attacks, including the Holliman-Memon attack [9]. In the new scheme, a special symmetry structure in the logo is used to authenticate the block content, while the logo itself carries information about the block origin (block index, the image index or time stamp, author ID, etc.). Because the authentication of the content and its origin are separated, it is possible to easily identify swapped blocks between images and accurately detect cropped areas, while being able to accurately localize tampered pixels.

  16. Channel Deviation-Based Power Control in Body Area Networks.

    PubMed

    Van, Son Dinh; Cotton, Simon L; Smith, David B

    2018-05-01

    Internet enabled body area networks (BANs) will form a core part of future remote health monitoring and ambient assisted living technology. In BAN applications, due to the dynamic nature of human activity, the off-body BAN channel can be prone to deep fading caused by body shadowing and multipath fading. Using this knowledge, we present some novel practical adaptive power control protocols based on the channel deviation to simultaneously prolong the lifetime of wearable devices and reduce outage probability. The proposed schemes are both flexible and relatively simple to implement on hardware platforms with constrained resources, making them inherently suitable for BAN applications. We present the key algorithm parameters used to dynamically respond to the channel variation. This allows the algorithms to achieve better energy efficiency and signal reliability in everyday usage scenarios such as those in which a person undertakes many different activities (e.g., sitting, walking, standing, etc.). We also profile their performance against traditional, optimal, and other existing schemes, demonstrating that not only is the outage probability reduced significantly, but the proposed algorithms also save average transmit power compared to the competing schemes.
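
    The abstract does not give the update rules, so the following Python fragment is purely a hypothetical illustration of the general idea: track the short-term level and deviation of the received signal and step the transmit power more aggressively when the channel is weak and highly variable.

      # Hypothetical channel-deviation-driven power control (illustration only).
      def update_tx_power(tx_dbm, rssi_window, target_rssi=-85.0,
                          p_min=-10.0, p_max=0.0):
          mean_rssi = sum(rssi_window) / len(rssi_window)
          deviation = max(rssi_window) - min(rssi_window)  # crude deviation proxy
          step = 1.0 + 0.1 * deviation                     # larger steps on volatile channels
          if mean_rssi < target_rssi:
              tx_dbm += step                               # boost power to avoid outage
          else:
              tx_dbm -= 0.5 * step                         # back off gently to save energy
          return min(max(tx_dbm, p_min), p_max)

      power = -5.0
      for window in ([-92, -95, -88], [-80, -82, -81], [-70, -99, -85]):
          power = update_tx_power(power, window)
          print(round(power, 2))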

  17. A novel OCS millimeter-wave generation scheme with data carried only by one sideband and wavelength reuse for uplink connection

    NASA Astrophysics Data System (ADS)

    Zhu, Zihang; Zhao, Shanghong; Yao, Zhoushi; Tan, Qinggui; Li, Yongjun; Chu, Xingchun; Shi, Lei; Hou, Rui

    2012-11-01

    We propose a novel optical carrier suppression (OCS) millimeter-wave generation scheme with data carried only by one sideband using a dual-drive Mach-Zehnder modulator (MZM) in radio-over-fiber system, and the transmission performance is also investigated. As the signal is transmitted along the fiber, there is no time shifting of the codes caused by chromatic dispersion. Simulation results show that the eye diagram keeps open and clear even when the optical millimeter-waves are transmitted over 110 km and the power penalty is about 1.9 dB after fiber transmission distance of 60 km. Furthermore, due to the +1 order sideband carrying no data, a full duplex radio-over-fiber link based on wavelength reuse is also built to simplify the base station. The bidirectional 2.5 Gbit/s data is successfully transmitted over a 40 km standard single mode fiber with less than 0.8 dB power penalty in the simulation. Both theoretical analysis and simulation results show that our scheme is feasible and we can obtain a simple cost-efficient configuration and good performance over long-distance transmission.

  18. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  19. Enhancement of brain tumor MR images based on intuitionistic fuzzy sets

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Deng, He; Cheng, Lifang

    2015-12-01

    Brain tumors are among the most fatal cancers; high-grade gliomas in particular are among the deadliest. However, brain tumor MR images usually have the disadvantages of low resolution and contrast when compared with optical images. Consequently, we present a novel adaptive intuitionistic fuzzy enhancement scheme, combining a nonlinear fuzzy filtering operation with fusion operators, for the enhancement of brain tumor MR images in this paper. The presented scheme consists of the following six steps: Firstly, the image is divided into several sub-images. Secondly, for each sub-image, object and background areas are separated by a simple threshold. Thirdly, respective intuitionistic fuzzy generators of object and background areas are constructed based on the modified restricted equivalence function. Fourthly, different suitable operations are performed on the respective membership functions of object and background areas. Fifthly, the membership plane is inversely transformed into the image plane. Finally, an enhanced image is obtained through fusion operators. The comparison and evaluation of enhancement performance demonstrate that the presented scheme is helpful for determining abnormal functional areas, guiding surgery, judging prognosis, and planning radiotherapy by enhancing the fine detail of MR images.

  20. Approximate Expressions for the Period of a Simple Pendulum Using a Taylor Series Expansion

    ERIC Educational Resources Information Center

    Belendez, Augusto; Arribas, Enrique; Marquez, Andres; Ortuno, Manuel; Gallego, Sergi

    2011-01-01

    An approximate scheme for obtaining the period of a simple pendulum for large-amplitude oscillations is analysed and discussed. When students express the exact frequency or the period of a simple pendulum as a function of the oscillation amplitude, and they are told to expand this function in a Taylor series, they always do so using the…
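
    For reference, the standard result that such a Taylor expansion yields for the large-amplitude period (amplitude theta_0, small-amplitude period T_0 = 2*pi*sqrt(L/g)) is, in LaTeX form:

      T \approx 2\pi\sqrt{\frac{L}{g}}
          \left( 1 + \frac{1}{16}\,\theta_0^{2} + \frac{11}{3072}\,\theta_0^{4} + \cdots \right)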

  1. An Improved Transformation and Optimized Sampling Scheme for the Numerical Evaluation of Singular and Near-Singular Potentials

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.

    2007-01-01

    Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.

  2. Low-Dispersion Scheme for Nonlinear Acoustic Waves in Nonuniform Flow

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Kaushik, Dinesh K.; Idres, Moumen

    1997-01-01

    The linear dispersion-relation-preserving scheme and its boundary conditions have been extended to the nonlinear Euler equations. This allowed a nonuniform flowfield and nonlinear acoustic wave propagation in such a medium to be computed with the same scheme. By casting all the equations, boundary conditions, and the solution scheme in generalized curvilinear coordinates, solutions were made possible for non-Cartesian domains and, for the better deployment of the grid points, nonuniform grid step sizes could be used. It has been tested for a number of simple initial-value and periodic-source problems. A simple demonstration of the difference between a linear and nonlinear propagation was conducted. The wall boundary condition, derived from the momentum equations and implemented through a pressure at a ghost point, and the radiation boundary condition, derived from the asymptotic solution to the Euler equations, have proven to be effective for the nonlinear equations and nonuniform flows. The nonreflective characteristic boundary conditions have also shown success, but only for nonlinear waves with no mean flow; they failed for nonlinear waves in nonuniform flow.

  3. An Identity Based Key Exchange Protocol in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Molli, Venkateswara Rao; Tiwary, Omkar Nath

    2012-10-01

    Workflow systems often use delegation to enhance the flexibility of authorization; delegation transfers privileges among users across different administrative domains and facilitates information sharing. We present an independently verifiable delegation mechanism, where a delegation credential can be verified without the participation of domain administrators. This protocol, called role-based cascaded delegation (RBCD), supports simple and efficient cross-domain delegation of authority. RBCD enables a role member to create delegations based on the dynamic needs of collaboration; in the meantime, a delegation chain can be verified by anyone without the participation of role administrators. We also propose the Measurable Risk Adaptive decentralized Role-based Delegation framework to address this problem. We describe an efficient realization of RBCD by using aggregate signatures, where the authentication information for an arbitrarily long role-based delegation chain is captured by one short signature of constant size. The protocol is general and can be realized by any signature scheme. We have described a specific realization with a hierarchical certificate-based encryption scheme that gives delegation compact credentials.

  4. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods the ones concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a Vector Quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison purposes with the proposed scheme: Learning Vector Quantization (LVQ); Feedforward Neural Networks; Support Vector Machine (SVM) and k-Nearest Neighbors. These four methods and the proposed scheme were evaluated on eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and largely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.

  5. Fractional order implementation of Integral Resonant Control - A nanopositioning application.

    PubMed

    San-Millan, Andres; Feliu-Batlle, Vicente; Aphale, Sumeet S

    2017-10-04

    By exploiting the co-located sensor-actuator arrangement in typical flexure-based piezoelectric stack actuated nanopositioners, the pole-zero interlacing exhibited by their axial frequency response can be transformed to a zero-pole interlacing by adding a constant feed-through term. The Integral Resonant Control (IRC) utilizes this unique property to add substantial damping to the dominant resonant mode by the use of a simple integrator implemented in closed loop. IRC used in conjunction with an integral tracking scheme effectively reduces positioning errors introduced by modelling inaccuracies or parameter uncertainties. Over the past few years, successful application of the IRC control technique to nanopositioning systems has demonstrated performance robustness, easy tunability and versatility. The main drawback has been the relatively small positioning bandwidth achievable. This paper proposes a fractional order implementation of the classical integral tracking scheme employed in tandem with the IRC scheme to deliver damping and tracking. The fractional order integrator introduces an additional design parameter which allows desired pole-placement, resulting in superior closed loop bandwidth. Simulations and experimental results are presented to validate the theory. A 250% improvement in the achievable positioning bandwidth is observed with the proposed fractional order scheme. Copyright © 2017. Published by Elsevier Ltd.

  6. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
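
    To make the "analytical solution via standard linear algebra" idea concrete, here is a minimal Python sketch (not VisKin itself) that propagates a first-order scheme A <-> B -> C with a matrix exponential; the rate constants are arbitrary illustrative values.

      import numpy as np
      from scipy.linalg import expm

      # First-order scheme A <-> B -> C written as dx/dt = K x.
      k_ab, k_ba, k_bc = 2.0, 0.5, 1.0                 # illustrative rate constants (1/s)
      K = np.array([[-k_ab,            k_ba, 0.0],
                    [ k_ab, -(k_ba + k_bc), 0.0],
                    [  0.0,            k_bc, 0.0]])

      x0 = np.array([1.0, 0.0, 0.0])                   # start with pure A
      for t in (0.0, 0.5, 1.0, 5.0):
          x_t = expm(K * t) @ x0                       # closed-form propagation to time t
          print(t, np.round(x_t, 4))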

  7. Thermodynamic Analysis of Chemically Reacting Mixtures-Comparison of First and Second Order Models.

    PubMed

    Pekař, Miloslav

    2018-01-01

    Recently, a method based on non-equilibrium continuum thermodynamics, which derives thermodynamically consistent reaction rate models together with thermodynamic constraints on their parameters, was analyzed using a triangular reaction scheme. The scheme was kinetically of the first order. Here, the analysis is further developed for several first and second order schemes to gain a deeper insight into the thermodynamic consistency of rate equations and the relationships between chemical thermodynamics and kinetics. It is shown that the thermodynamic constraints on the so-called proper rate coefficient are usually simple sign restrictions consistent with the supposed reaction directions. Constraints on the so-called coupling rate coefficients are more complex and weaker. This means more freedom in kinetic coupling between reaction steps in a scheme, i.e., in the kinetic effects of other reactions on the rate of some reaction in a reacting system. When compared with traditional mass-action rate equations, the method allows a reduction in the number of traditional rate constants to be evaluated from data, i.e., a reduction in the dimensionality of the parameter estimation problem. This is due to identifying relationships between mass-action rate constants (relationships which also include thermodynamic equilibrium constants) which have so far been unknown.

  8. A simple, objective analysis scheme for scatterometer data. [Seasat A satellite observation of wind over ocean

    NASA Technical Reports Server (NTRS)

    Levy, G.; Brown, R. A.

    1986-01-01

    A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.

  9. Comparison of two integration methods for dynamic causal modeling of electrophysiological data.

    PubMed

    Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier

    2018-06-01

    Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves it with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme in regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically obtained an increased accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  10. A hybrid quantum eraser scheme for characterization of free-space and fiber communication channels

    NASA Astrophysics Data System (ADS)

    Nape, Isaac; Kyeremah, Charlotte; Vallés, Adam; Rosales-Guzmán, Carmelo; Buah-Bassuah, Paul K.; Forbes, Andrew

    2018-02-01

    We demonstrate a simple projective measurement based on the quantum eraser concept that can be used to characterize the disturbances of any communication channel. Quantum erasers are commonly implemented as spatially separated path interferometric schemes. Here we exploit the advantages of redefining the which-path information in terms of spatial modes, replacing physical paths with abstract paths of orbital angular momentum (OAM). Remarkably, vector modes (natural modes of free-space and fiber) have a non-separable feature of spin-orbit coupled states, equivalent to the description of two independently marked paths. We explore the effects of fiber perturbations by probing a step-index optical fiber channel with a vector mode, relevant to high-order spatial mode encoding of information for ultra-fast fiber communications.

  11. On simulating flow with multiple time scales using a method of averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.

  12. Controlling front-end electronics boards using commercial solutions

    NASA Astrophysics Data System (ADS)

    Beneyton, R.; Gaspar, C.; Jost, B.; Schmeling, S.

    2002-04-01

    LHCb is a dedicated B-physics experiment under construction at CERN's large hadron collider (LHC) accelerator. This paper will describe the novel approach LHCb is taking toward controlling and monitoring of electronics boards. Instead of using the bus in a crate to exercise control over the boards, we use credit-card sized personal computers (CCPCs) connected via Ethernet to cheap control PCs. The CCPCs will provide a simple parallel, I2C, and JTAG buses toward the electronics board. Each board will be equipped with a CCPC and, hence, will be completely independently controlled. The advantages of this scheme versus the traditional bus-based scheme will be described. Also, the integration of the controls of the electronics boards into a commercial supervisory control and data acquisition (SCADA) system will be shown.

  13. Thermal Conductivity of Single-Walled Carbon Nanotube with Internal Heat Source Studied by Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Li, Yuan-Wei; Cao, Bing-Yang

    2013-12-01

    The thermal conductivity of (5, 5) single-walled carbon nanotubes (SWNTs) with an internal heat source is investigated by using nonequilibrium molecular dynamics (NEMD) simulation incorporating uniform heat source and heat source-and-sink schemes. Compared with SWNTs without an internal heat source, i.e., by a fixed-temperature difference scheme, the thermal conductivity of SWNTs with an internal heat source is much lower, by as much as half in some cases, though it still increases with an increase of the tube length. Based on the theory of phonon dynamics, a function called the phonon free path distribution is defined to develop a simple one-dimensional heat conduction model considering an internal heat source, which can explain diffusive-ballistic heat transport in carbon nanotubes well.

  14. Beam alignment based on two-dimensional power spectral density of a near-field image.

    PubMed

    Wang, Shenzhen; Yuan, Qiang; Zeng, Fa; Zhang, Xin; Zhao, Junpu; Li, Kehong; Zhang, Xiaolu; Xue, Qiao; Yang, Ying; Dai, Wanjun; Zhou, Wei; Wang, Yuanchen; Zheng, Kuixing; Su, Jingqin; Hu, Dongxia; Zhu, Qihua

    2017-10-30

    Beam alignment is crucial to high-power laser facilities and is used to adjust the laser beams quickly and accurately to meet stringent requirements of pointing and centering. In this paper, a novel alignment method is presented, which employs data processing of the two-dimensional power spectral density (2D-PSD) for a near-field image and resolves the beam pointing error relative to the spatial filter pinhole directly. Combining this with a near-field fiducial mark, the operation of beam alignment is achieved. It is experimentally demonstrated that this scheme realizes a far-field alignment precision of approximately 3% of the pinhole size. This scheme adopts only one near-field camera to construct the alignment system, which provides a simple, efficient, and low-cost way to align lasers.
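
    As a minimal illustration of the first step only (the 2D-PSD itself, not the paper's pointing-error extraction), the spectrum of a near-field frame can be obtained from the squared magnitude of its 2-D FFT; the synthetic beam profile below is an assumption for demonstration.

      import numpy as np

      def psd_2d(image):
          """Two-dimensional power spectral density of a near-field image."""
          img = np.asarray(image, dtype=float)
          img = img - img.mean()                       # remove the DC pedestal
          spectrum = np.fft.fftshift(np.fft.fft2(img))
          return np.abs(spectrum) ** 2 / img.size

      # Synthetic frame: smooth beam profile plus a weak periodic ripple.
      y, x = np.mgrid[-64:64, -64:64]
      frame = np.exp(-(x**2 + y**2) / (2 * 30.0**2)) + 0.05 * np.sin(2 * np.pi * x / 8)
      print(psd_2d(frame).shape)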

  15. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems

    PubMed Central

    Luo, Zhongqiang; Zhu, Lidong

    2015-01-01

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequence of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low. PMID:26287209

  16. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems.

    PubMed

    Luo, Zhongqiang; Zhu, Lidong

    2015-08-14

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequence of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low.

  17. Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei

    2016-09-01

    For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piece-wise constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena including polarized motion orientations and periodic response death are discussed.

  18. Clinical risk scoring for predicting non-alcoholic fatty liver disease in metabolic syndrome patients (NAFLD-MS score).

    PubMed

    Saokaew, Surasak; Kanchanasuwan, Shada; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Charatcharoenwitthaya, Phunchai; Phisalprapa, Pochamana; Chaiyakunapruk, Nathorn

    2017-10-01

    Non-alcoholic fatty liver disease (NAFLD) can progress from simple steatosis to hepatocellular carcinoma. No tools have been developed specifically for high-risk patients. This study aimed to develop a simple risk scoring scheme to predict NAFLD in patients with metabolic syndrome (MetS). A total of 509 patients with MetS were recruited. All were diagnosed by clinicians, with ultrasonography confirming whether they had NAFLD. Patients were randomly divided into derivation (n=400) and validation (n=109) cohorts. To develop the risk score, clinical risk indicators measured at the time of recruitment were modelled by logistic regression. Regression coefficients were transformed into item scores and added up to a total score. A risk scoring scheme was developed from clinical predictors: BMI ≥25, AST/ALT ≥1, ALT ≥40, type 2 diabetes mellitus and central obesity. The scoring scheme was applied in the validation cohort to test its performance. The scheme discriminated NAFLD with an area under the receiver operating characteristic curve (AuROC) of 76.8%, with good calibration (Hosmer-Lemeshow χ2 = 4.35; P=.629). The positive likelihood ratios of NAFLD in patients with low risk (scores below 3) and high risk (scores 5 and over) were 2.32 (95% CI: 1.90-2.82) and 7.77 (95% CI: 2.47-24.47), respectively. When applied in the validation cohort, the score showed good performance with an AuROC of 76.7%, and indicated 84% and 100% certainty in the low- and high-risk groups, respectively. A simple and non-invasive scoring scheme of five predictors provides good prediction indices for NAFLD in MetS patients. This scheme may help clinicians take further appropriate action. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
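
    The abstract lists the predictors and the risk cut-offs but not the published item weights, so the Python sketch below shows only the generic construction (regression coefficients rounded into integer item points and summed); the coefficients are entirely hypothetical and are not the published NAFLD-MS weights.

      predictors = ["BMI>=25", "AST/ALT>=1", "ALT>=40",
                    "type 2 diabetes", "central obesity"]
      log_odds = [0.9, 0.6, 1.1, 0.7, 0.8]        # hypothetical logistic coefficients

      base = min(log_odds)                        # smallest effect defines one point
      item_points = [round(b / base) for b in log_odds]

      def total_score(flags):
          """Sum item points for the predictors present (flags of 0/1)."""
          return sum(p * f for p, f in zip(item_points, flags))

      score = total_score([1, 1, 0, 1, 0])
      band = "high" if score >= 5 else ("low" if score < 3 else "intermediate")
      print(item_points, score, band)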

  19. Supporting the Virtual Soldier With a Physics-Based Software Architecture

    DTIC Science & Technology

    2005-06-01


  20. Reply to Comment on ‘Authenticated quantum secret sharing with quantum dialogue based on Bell states'

    NASA Astrophysics Data System (ADS)

    Abulkasim, Hussein; Hamad, Safwat; Elhadad, Ahmed

    2018-02-01

    In the Comment made by Gao (2018 Phys. Scr. 93 027002), it has been shown that the multiparty case in our proposed scheme in Abulkasim et al (2016 Phys. Scr. 91 085101) is not secure, where Bob and Charlie can deduce Alice’s unitary operations without being detected. This reply shows a simple modification of the multiparty case to prevent the dishonest agents from performing this kind of attack.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael; Turitsyn, Konstantin; Sulc, Petr

    The anticipated increase in the number of plug-in electric vehicles (EV) will put additional strain on electrical distribution circuits. Many control schemes have been proposed to control EV charging. Here, we develop control algorithms based on randomized EV charging start times and simple one-way broadcast communication allowing for a time delay between communication events. Using arguments from queuing theory and statistical analysis, we seek to maximize the utilization of excess distribution circuit capacity while keeping the probability of a circuit overload negligible.
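
    A toy Python sketch of the randomized-start idea under the stated one-way broadcast model (all parameters hypothetical): on receiving a broadcast signal, each vehicle delays its charging start by an independent random time so that connections are spread out rather than synchronized.

      import random

      def schedule_starts(n_vehicles, max_delay_minutes, broadcast_time=0.0, seed=0):
          """Give each EV an independent, uniformly random charging start time."""
          rng = random.Random(seed)
          return [broadcast_time + rng.uniform(0.0, max_delay_minutes)
                  for _ in range(n_vehicles)]

      def peak_simultaneous(starts, charge_minutes):
          """Largest number of vehicles charging at once (checked at start instants)."""
          return max(sum(s <= t < s + charge_minutes for s in starts) for t in starts)

      starts = schedule_starts(n_vehicles=50, max_delay_minutes=120)
      print(peak_simultaneous(starts, charge_minutes=30))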

  2. A Minimal Three-Dimensional Tropical Cyclone Model.

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang

    2001-07-01

    A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes. The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.

  3. Soil moisture downscaling using a simple thermal based proxy

    NASA Astrophysics Data System (ADS)

    Peng, Jian; Loew, Alexander; Niesel, Jonathan

    2016-04-01

    Microwave remote sensing has been widely applied to retrieve soil moisture (SM) from active and passive sensors. The obvious advantage of microwave sensors is that SM can be obtained regardless of atmospheric conditions. However, existing global SM products only provide observations at coarse spatial resolutions, which often hampers their application in regional hydrological studies. Therefore, various downscaling methods have been proposed to enhance the spatial resolution of satellite soil moisture products. The aim of this study is to investigate the validity and robustness of a simple Vegetation Temperature Condition Index (VTCI) downscaling scheme over different climates and regions. Both polar orbiting (MODIS) and geostationary (MSG SEVIRI) satellite data are used to improve the spatial resolution of the European Space Agency's Water Cycle Multi-mission Observation Strategy and Climate Change Initiative (ESA CCI) soil moisture, which is a merged product based on both active and passive microwave observations. The results from direct validation against soil moisture in-situ measurements, spatial pattern comparison, as well as seasonal and land use analyses show that the downscaling method can significantly improve the spatial details of CCI soil moisture while maintaining its accuracy. The application of the scheme with different satellite platforms and over different regions further demonstrates the robustness and effectiveness of the proposed method. Therefore, the VTCI downscaling method has the potential to facilitate relevant hydrological applications that require soil moisture at high spatial and temporal resolution.
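
    The abstract does not spell out the downscaling formula; one plausible VTCI-style reading, used here purely as an assumption, is to scale each coarse soil-moisture pixel by the ratio of the fine-scale VTCI to its average over that coarse pixel, with VTCI computed from land-surface temperature (LST) against assumed warm and cold edges of the LST-NDVI space.

      import numpy as np

      def vtci(lst, lst_warm_edge, lst_cold_edge):
          """Vegetation Temperature Condition Index for one NDVI bin (assumed form)."""
          return (lst_warm_edge - lst) / (lst_warm_edge - lst_cold_edge)

      def downscale_pixel(sm_coarse, vtci_fine):
          """Assumed scaling: redistribute coarse SM with the fine-scale VTCI pattern."""
          return sm_coarse * vtci_fine / vtci_fine.mean()

      # Illustrative 4x4 block of fine pixels inside one coarse CCI pixel.
      lst_fine = np.array([[305., 303., 300., 298.],
                           [306., 304., 301., 297.],
                           [307., 305., 299., 296.],
                           [308., 306., 298., 295.]])
      v = vtci(lst_fine, lst_warm_edge=310.0, lst_cold_edge=290.0)
      print(np.round(downscale_pixel(0.20, v), 3))    # coarse SM of 0.20 m3/m3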

  4. Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro

    PID control schemes continue to be widely used for most industrial control systems. This is mainly because PID controllers have simple control structures, and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters in the case of time-varying and/or nonlinear systems. For such problems, robust controllers have been proposed. Although it is important to choose a suitable nominal model when designing a robust controller, this is not usually easy. In this paper, a new robust PD controller design scheme is proposed, which utilizes a genetic algorithm.
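
    A compact Python sketch of the idea (not the authors' algorithm): a small genetic search over (Kp, Kd) that minimizes a tracking cost computed on a toy unit-mass plant; the plant, cost function, and GA settings are all illustrative assumptions.

      import numpy as np

      def step_cost(kp, kd, dt=0.01, steps=500):
          """Integrated absolute error of a toy unit-mass plant tracking a unit step."""
          x, v, cost = 0.0, 0.0, 0.0
          for _ in range(steps):
              err = 1.0 - x
              u = kp * err - kd * v                # PD law on error and velocity
              v += u * dt
              x += v * dt
              cost += abs(err) * dt
          return cost

      def ga_tune(pop_size=20, generations=30, seed=0):
          """Very small GA: elitist selection plus Gaussian mutation over (Kp, Kd)."""
          rng = np.random.default_rng(seed)
          pop = rng.uniform([0.1, 0.1], [50.0, 20.0], size=(pop_size, 2))
          for _ in range(generations):
              fitness = np.array([step_cost(kp, kd) for kp, kd in pop])
              elite = pop[np.argsort(fitness)[: pop_size // 4]]   # keep the best quarter
              children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
              children = children + rng.normal(0.0, 0.5, children.shape)  # mutate
              pop = np.clip(np.vstack([elite, children]), 0.1, 50.0)
          return pop[np.argmin([step_cost(kp, kd) for kp, kd in pop])]

      print(ga_tune())     # approximate (Kp, Kd) found by the search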

  5. A simple scheme for magnetic balance in four-component relativistic Kohn-Sham calculations of nuclear magnetic resonance shielding constants in a Gaussian basis.

    PubMed

    Olejniczak, Małgorzata; Bast, Radovan; Saue, Trond; Pecul, Magdalena

    2012-01-07

    We report the implementation of nuclear magnetic resonance (NMR) shielding tensors within four-component relativistic Kohn-Sham density functional theory, including non-collinear spin magnetization and employing London atomic orbitals to ensure gauge origin independent results, together with a new and efficient scheme for assuring the correct balance between the large and small components of a molecular four-component spinor in the presence of an external magnetic field (simple magnetic balance). To test our formalism we have carried out calculations of NMR shielding tensors for the HX series (X = F, Cl, Br, I, At), the Xe atom, and the Xe dimer. The advantage of the simple magnetic balance scheme combined with the use of London atomic orbitals is the fast convergence of results (when compared with restricted kinetic balance) and the elimination of linear dependencies in the basis set (when compared to unrestricted kinetic balance). The effect of including spin magnetization in the description of the NMR shielding tensor has been found to be important for hydrogen atoms in heavy HX molecules, causing an increase in isotropic values of 10%, but negligible for heavy atoms.

  6. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.

  7. Classification scheme for phenomenological universalities in growth problems in physics and other sciences.

    PubMed

    Castorina, P; Delsanto, P P; Guiot, C

    2006-05-12

    A classification in universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for cross fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well-known Gompertz, West, and logistic models suggests to us the study of a hitherto unexplored class of nonlinear growth problems.
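
    For concreteness, the two best-known growth laws mentioned above can be written down directly; the sketch below integrates the Gompertz and logistic equations with a simple forward Euler step (parameter values are arbitrary illustrations).

    ```python
    import numpy as np

    def gompertz_rhs(v, a=1.0, v_inf=100.0):
        # Gompertz growth: dV/dt = a * V * ln(V_inf / V)
        return a * v * np.log(v_inf / v)

    def logistic_rhs(v, r=1.0, k=100.0):
        # Logistic growth: dV/dt = r * V * (1 - V / K)
        return r * v * (1.0 - v / k)

    def integrate(rhs, v0=1.0, dt=0.01, steps=2000):
        v = v0
        for _ in range(steps):
            v += dt * rhs(v)          # forward Euler step
        return v

    print(integrate(gompertz_rhs), integrate(logistic_rhs))  # both approach the asymptote of 100
    ```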

  8. An improved lambda-scheme for one-dimensional flows

    NASA Technical Reports Server (NTRS)

    Moretti, G.; Dipiano, M. T.

    1983-01-01

    A code for the calculation of one-dimensional flows is presented, which combines a simple and efficient version of the lambda-scheme with tracking of discontinuities. The latter is needed to identify points where minor departures from the basic integration scheme are applied to prevent infiltration of numerical errors. Such a tracking is obtained via a systematic application of Boolean algebra. It is, therefore, very efficient. Fifteen examples are presented and discussed in detail. The results are exceptionally good. All discontinuites are captured within one mesh interval.

  9. A user authentication scheme using physiological and behavioral biometrics for multitouch devices.

    PubMed

    Koong, Chorng-Shiuh; Yang, Tzu-I; Tseng, Chien-Chao

    2014-01-01

    With the rapid growth of mobile networks, tablets and smart phones have become keys to access personal secured services in our daily life. People use these devices to manage personal finances, shop on the Internet, and even pay at vending machines. Besides, they also help us stay connected with friends and business partners through social network applications, which are widely used as personal identifications in both real and virtual societies. However, these devices use an inherently weak authentication mechanism, based upon passwords and PINs that are rarely changed. Although forcing users to change passwords periodically can enhance the security level, it may also be considered an annoyance by users. Biometric technologies are straightforward because of the simple authentication process. However, most traditional biometric methodologies require diverse equipment to acquire biometric information, which may be expensive and not portable. This paper proposes a multibiometric user authentication scheme with both physiological and behavioral biometrics. Only simple rotations with fingers on multitouch devices are required to enhance the security level without annoyance for users. In addition, the user credential is replaceable to prevent privacy leakage.

  10. A User Authentication Scheme Using Physiological and Behavioral Biometrics for Multitouch Devices

    PubMed Central

    Koong, Chorng-Shiuh; Tseng, Chien-Chao

    2014-01-01

    With the rapid growth of mobile networks, tablets and smart phones have become keys to access personal secured services in our daily life. People use these devices to manage personal finances, shop on the Internet, and even pay at vending machines. Besides, they also help us stay connected with friends and business partners through social network applications, which are widely used as personal identifications in both real and virtual societies. However, these devices use an inherently weak authentication mechanism, based upon passwords and PINs that are rarely changed. Although forcing users to change passwords periodically can enhance the security level, it may also be considered an annoyance by users. Biometric technologies are straightforward because of the simple authentication process. However, most traditional biometric methodologies require diverse equipment to acquire biometric information, which may be expensive and not portable. This paper proposes a multibiometric user authentication scheme with both physiological and behavioral biometrics. Only simple rotations with fingers on multitouch devices are required to enhance the security level without annoyance for users. In addition, the user credential is replaceable to prevent privacy leakage. PMID:25147864

  11. A SIMPLE, EFFICIENT SOLUTION OF FLUX-PROFILE RELATIONSHIPS IN THE ATMOSPHERIC SURFACE LAYER

    EPA Science Inventory

    This note describes a simple scheme for analytical estimation of the surface layer similarity functions from state variables. What distinguishes this note from the many previous papers on this topic is that this method is specifically targeted for numerical models where simplici...

  12. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
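
    The multiplicative structure being assessed is simple to state: the net parametrised tendency is scaled by (1 + r), where r is a bounded random pattern correlated in space and time. The sketch below uses a first-order autoregressive field as a stand-in for ECMWF's spectral pattern generator; grid size, clipping and parameters are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ar1_pattern(prev, phi=0.95, sigma=0.2):
        # First-order autoregressive stand-in for SPPT's correlated random pattern.
        return phi * prev + np.sqrt(1.0 - phi**2) * rng.normal(0.0, sigma, prev.shape)

    def sppt_tendency(physics_tendency, r, clip=1.0):
        # Multiplicative perturbation: T_perturbed = (1 + r) * T_physics,
        # with r clipped to [-1, 1] so the sign of the tendency is preserved.
        r = np.clip(r, -clip, clip)
        return (1.0 + r) * physics_tendency

    r = np.zeros((64, 64))
    for step in range(10):
        r = ar1_pattern(r)
        tendency = rng.normal(size=(64, 64))       # placeholder physics tendency
        perturbed = sppt_tendency(tendency, r)
    ```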

  13. Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?

    NASA Astrophysics Data System (ADS)

    Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.

    2013-09-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach." This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observations to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy occurring is due to the method. Focusing on the total water mixing ratio, we find that the mean can be evaluated decently, but whether the variance and skewness are reliable depends strongly on the meteorological conditions. Using some simple schematic descriptions of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation is necessary to judge whether we can use the data for an evaluation of higher moments of the humidity distribution used by a statistical cloud scheme.

  14. Improving irrigation and groundwater parameterizations in the Community Land Model (CLM) using in-situ observations and satellite data

    NASA Astrophysics Data System (ADS)

    Felfelani, F.; Pokhrel, Y. N.

    2017-12-01

    In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on the soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and data from the Soil Moisture Active Passive (SMAP) mission, available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two different irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate model results with the available irrigation water use data at the county scale. Finally, we present results of groundwater simulations by implementing a simple pumping scheme based on our previous studies. Results from the implementation of current irrigation parameterizations used in various LSMs show relatively large differences in the vertical soil moisture content profile (e.g., 0.2 mm3/mm3) at the point scale, which mostly decrease when averaged over relatively large regions (e.g., 0.04 mm3/mm3 in the High Plains region). It is found that the original irrigation module in CLM 4.5 tends to overestimate the soil moisture content compared to both point observations and SMAP, and the results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.

  15. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor networks (WSNs) into emerging computing paradigms, e.g., cyber-physical social sensing (CPSS), has attracted growing interest, and a WSN can serve as a social network while receiving more attention from the social computing research field. The localization of sensor nodes has therefore become an essential requirement for many applications over WSNs, and the localization information of unknown nodes strongly affects the performance of the WSN. The received signal strength indication (RSSI), a typical range-based algorithm for positioning sensor nodes in a WSN, can achieve accurate locations with little additional hardware, but is sensitive to environmental noise. Moreover, the original distance vector hop (DV-HOP), an important range-free localization algorithm, is simple, inexpensive and unrelated to environmental factors, but performs poorly when anchor nodes are scarce. Motivated by these observations, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM, featured with a fast learning speed, good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) on the basis of ELM-RCC is used to implement the optimization task for obtaining the location of unknown nodes. Since the RSSI may be influenced by environmental noise and may introduce estimation error, the RCC, instead of the noise-sensitive mean square error (MSE) estimation, is exploited in ELM. Hence, it makes the estimation more robust against outliers. Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
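
    As background to the improved schemes discussed above, the classic DV-HOP estimate itself is easy to sketch: anchors compute an average hop size from known inter-anchor distances and hop counts, unknown nodes convert their hop counts into ranges, and a linear least-squares step gives the position. The toy anchor layout and hop counts below are made up for illustration and are not from the paper.

    ```python
    import numpy as np

    def average_hop_size(anchors, anchor_hops):
        """Per-anchor correction factor: total distance to the other anchors
        divided by the total hop count to them (classic DV-HOP step)."""
        m = len(anchors)
        sizes = np.zeros(m)
        for i in range(m):
            dist_sum = sum(np.linalg.norm(anchors[i] - anchors[j]) for j in range(m) if j != i)
            hop_sum = sum(anchor_hops[i][j] for j in range(m) if j != i)
            sizes[i] = dist_sum / hop_sum
        return sizes

    def locate(anchors, node_hops, hop_size):
        """Estimate the node position by linear least squares from hop-based ranges."""
        d = hop_size * np.asarray(node_hops, dtype=float)   # estimated ranges
        # Linearize (x - xi)^2 + (y - yi)^2 = di^2 against the last anchor.
        xl, yl, dl = anchors[-1, 0], anchors[-1, 1], d[-1]
        A = 2.0 * (anchors[:-1] - anchors[-1])
        b = (anchors[:-1, 0]**2 - xl**2 + anchors[:-1, 1]**2 - yl**2 + dl**2 - d[:-1]**2)
        return np.linalg.lstsq(A, b, rcond=None)[0]

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    anchor_hops = [[0, 4, 4, 6], [4, 0, 6, 4], [4, 6, 0, 4], [6, 4, 4, 0]]
    hop_size = average_hop_size(anchors, anchor_hops).mean()
    print(locate(anchors, node_hops=[2, 3, 3, 4], hop_size=hop_size))
    ```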

  16. Pseudorandom Noise Code-Based Technique for Thin Cloud Discrimination with CO2 and O2 Absorption Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Prasad, Narasimha S.; Flood, Michael A.

    2011-01-01

    NASA Langley Research Center is working on a continuous wave (CW) laser based remote sensing scheme for the detection of CO2 and O2 from space based platforms suitable for the ACTIVE SENSING OF CO2 EMISSIONS OVER NIGHTS, DAYS, AND SEASONS (ASCENDS) mission. ASCENDS is a future space-based mission to determine the global distribution of sources and sinks of atmospheric carbon dioxide (CO2). A unique, multi-frequency, intensity modulated CW (IMCW) laser absorption spectrometer (LAS) operating at 1.57 micron for CO2 sensing has been developed. Effective aerosol and cloud discrimination techniques are being investigated in order to determine concentration values with accuracies better than 0.3%. In this paper, we discuss the demonstration of a pseudo noise (PN) code based technique for cloud and aerosol discrimination applications. The possibility of using maximum length (ML)-sequences for range and absorption measurements is investigated. A simple model for accomplishing this objective is formulated. Proof-of-concept experiments, carried out using a SONAR based LIDAR simulator built from simple audio hardware, provided promising results for extension into optical wavelengths.

  17. Interface- and discontinuity-aware numerical schemes for plasma 3-T radiation diffusion in two and three dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, William W., E-mail: dai@lanl.gov; Scannapieco, Anthony J.

    2015-11-01

    A set of numerical schemes is developed for two- and three-dimensional time-dependent 3-T radiation diffusion equations in systems involving multi-materials. To resolve sub-cell structure, interface reconstruction is implemented within any cell that has more than one material. Therefore, the system of 3-T radiation diffusion equations is solved on two- and three-dimensional polyhedral meshes. The focus of the development is on the full coupling between radiation and material, the treatment of nonlinearity in the equations, i.e., in the diffusion terms and source terms, the treatment of the discontinuity across cell interfaces in material properties, the formulations for both transient and steady states, the behavior for large time steps, and second order accuracy in both space and time. The discontinuity of material properties between different materials is correctly treated based on the governing physics principle for general polyhedral meshes and full nonlinearity. The treatment is exact for arbitrarily strong discontinuity. The scheme is fully nonlinear for the full nonlinearity in the 3-T diffusion equations. Three temperatures are fully coupled and are updated simultaneously. The scheme is general in two and three dimensions on general polyhedral meshes. The features of the scheme are demonstrated through numerical examples for transient problems and steady states. The effects of some simplifications of the numerical schemes are also shown through numerical examples, such as linearization, a simple average of the diffusion coefficient, and an approximate treatment of the coupling between radiation and material.
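
    One of the simplifications whose effect the abstract examines is a simple average of the diffusion coefficient at a material interface. A common flux-preserving alternative, shown here only as a generic illustration and not necessarily the authors' exact interface treatment, is the distance-weighted harmonic mean of the two cell coefficients.

    ```python
    def face_coefficient_arithmetic(d_left, d_right):
        # Simple average: smears strong discontinuities in material properties.
        return 0.5 * (d_left + d_right)

    def face_coefficient_harmonic(d_left, d_right, h_left=1.0, h_right=1.0):
        # Distance-weighted harmonic mean: preserves continuity of the diffusive
        # flux across the face, which matters when d_left >> d_right.
        return (h_left + h_right) / (h_left / d_left + h_right / d_right)

    print(face_coefficient_arithmetic(1e3, 1e-3))  # ~500: dominated by the large coefficient
    print(face_coefficient_harmonic(1e3, 1e-3))    # ~2e-3: limited by the small coefficient
    ```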

  18. MeMoVolc report on classification and dynamics of volcanic explosive eruptions

    NASA Astrophysics Data System (ADS)

    Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.

    2016-11-01

    Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.

  19. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  20. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  1. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations: A New Approach Based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao

    2003-01-01

    This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  2. Flowfield computation of entry vehicles

    NASA Technical Reports Server (NTRS)

    Prabhu, Dinesh K.

    1990-01-01

    The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.

  3. A simple filter circuit for denoising biomechanical impact signals.

    PubMed

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  4. Simple adaptive control system design for a quadrotor with an internal PFC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizumoto, Ikuro; Nakamura, Takuto; Kumon, Makoto

    2014-12-10

    The paper deals with an adaptive control system design problem for a four rotor helicopter or quadrotor. A simple adaptive control design scheme with a parallel feedforward compensator (PFC) in the internal loop of the considered quadrotor will be proposed based on the backstepping strategy. As is well known, the backstepping control strategy is one of the advanced control strategies for nonlinear systems. However, the control algorithm will become complex if the system has higher order relative degrees. We will show that one can skip some design steps of the backstepping method by introducing a PFC in the inner loop of the considered quadrotor, so that the structure of the obtained controller will be simplified and a high gain based adaptive feedback control system can be designed. The effectiveness of the proposed method will be confirmed through numerical simulations.

  5. Announcement/Subscription/Publication: Message Based Communication for Heterogeneous Mobile Environments

    NASA Astrophysics Data System (ADS)

    Ristau, Henry

    Many tasks in smart environments can be implemented using message based communication paradigms that decouple applications in time, space, synchronization and semantics. Current solutions for decoupled message based communication either do not support message processing, and thus semantic decoupling, or rely on clearly defined network structures. In this paper we present ASP, a novel concept for such communication that can operate directly on neighbor relations between brokers and does not rely on a homogeneous addressing scheme or on anything more than simple link layer communication. We show by simulation that ASP performs well in a heterogeneous scenario with mobile nodes and decreases network or processor load significantly compared to message flooding.

  6. Research on the thickness control method of workbench oil film based on theoretical model

    NASA Astrophysics Data System (ADS)

    Pei, Tang; Lin, Lin; Liu, Ge; Yu, Liping; Xu, Zhen; Zhao, Di

    2018-06-01

    To improve the thickness adjustability of the workbench oil film, we designed a software system to control the oil film thickness based on the Siemens 840dsl CNC system and set up an experimental platform. A regulation scheme for oil film thickness based on a theoretical model is proposed, and its accuracy and feasibility are proved by the experimental results. It is verified that the method can meet the demands of workbench oil film thickness control, and that the experiment is simple and efficient with high control precision. Reliable theoretical support is thus provided for the development of an active workbench oil film control system.

  7. Numerical simulation of steady three-dimensional flows in axial turbomachinery bladerows

    NASA Astrophysics Data System (ADS)

    Basson, Anton Herman

    The formulation for and application of a numerical model for low Mach number steady three-dimensional flows in axial turbomachinery blade rows is presented. The formulation considered here includes an efficient grid generation scheme (particularly suited to computational grids for the analysis of turbulent turbomachinery flows) and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, applicable to viscous and inviscid flows. The grid generation technique uses a combination of algebraic and elliptic methods, in conjunction with the Minimal Residual Method, to economically generate smooth structured grids. For typical H-grids in turbomachinery bladerows, when compared to a purely elliptic grid generation scheme, the presented grid generation scheme produces grids with much improved smoothness near the leading and trailing edges, allows the use of small near wall grid spacing required by low Reynolds number turbulence models, and maintains orthogonality of the grid near the solid boundaries even for high flow angle cascades. A specialized embedded H-grid for application particularly to tip clearance flows is presented. This topology smoothly discretizes the domain without modifying the tip shape, while requiring only minor modifications to H-grid flow solvers. Better quantitative modeling of the tip clearance vortex structure than that obtained with a pinched tip approximation is demonstrated. The formulation of artificial dissipation terms for a semi-implicit, pressure-based (SIMPLE type) flow solver, is presented. It is applied to both the Euler and the Navier-Stokes equations, expressed in generalized coordinates using a non-staggered grid. This formulation is compared to some SIMPLE and time marching formulations, revealing the artificial dissipation inherent in some commonly used semi-implicit formulations. The effect of the amount of dissipation on the accuracy of the solution and the convergence rate is quantitatively demonstrated for a number of flow cases. The ability of the formulation to model complex steady turbomachinery flows is demonstrated, e.g. for pressure driven secondary flows, turbine nozzle wakes, turbulent boundary layers. The formulation's modeling of blade surface heat transfer is assessed. The numerical model is used to investigate the structure of phenomena associated with tip clearance flows in a turbine nozzle.

  8. Simple adaptive control for quadcopters with saturated actuators

    NASA Astrophysics Data System (ADS)

    Borisov, Oleg I.; Bobtsov, Alexey A.; Pyrkin, Anton A.; Gromov, Vladislav S.

    2017-01-01

    The stabilization problem for quadcopters with saturated actuators is considered. A simple adaptive output control approach is proposed. The control law "consecutive compensator" is augmented with the auxiliary integral loop and anti-windup scheme. Efficiency of the obtained regulator was confirmed by simulation of the quadcopter control problem.

  9. A pilot evaluation of two G-seat cueing schemes

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1978-01-01

    A comparison was made of two contrasting G-seat cueing schemes. The G-seat, an aircraft simulation subsystem, creates aircraft acceleration cues via seat contour changes. Of the two cueing schemes tested, one was designed to create skin pressure cues and the other was designed to create body position cues. Each cueing scheme was tested and evaluated subjectively by five pilots regarding its ability to cue the appropriate accelerations in each of four simple maneuvers: a pullout, a pushover, an S-turn maneuver, and a thrusting maneuver. A divergence of pilot opinion occurred, revealing that the perception and acceptance of G-seat stimuli is a highly individualistic phenomenon. The creation of one acceptable G-seat cueing scheme was, therefore, deemed to be quite difficult.

  10. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
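
    The greedy baseline that the Monte Carlo tree search is compared against is easy to sketch: repeatedly factor out the most-occurring variable. The dictionary representation of polynomials below is an assumption of this sketch, and the MCTS-driven ordering search from the paper is not reproduced.

    ```python
    from collections import Counter

    def horner(poly, variables):
        """Greedy multivariate Horner factorization ("most-occurring variable first").

        poly: dict mapping exponent tuples (one entry per variable) to coefficients,
              e.g. {(2, 1): 3, (0, 0): 5} means 3*x**2*y + 5. Returns a nested expression string.
        """
        poly = {e: c for e, c in poly.items() if c != 0}
        if not poly:
            return "0"
        # Pick the variable that occurs in the most terms (the greedy textbook rule).
        counts = Counter(i for exps in poly for i, e in enumerate(exps) if e > 0)
        if not counts:
            return str(sum(poly.values()))        # only a constant term is left
        var = counts.most_common(1)[0][0]
        with_v = {e: c for e, c in poly.items() if e[var] > 0}
        without_v = {e: c for e, c in poly.items() if e[var] == 0}
        # Factor one power of the chosen variable out of the terms that contain it.
        reduced = {tuple(x - 1 if i == var else x for i, x in enumerate(e)): c
                   for e, c in with_v.items()}
        expr = f"{variables[var]}*({horner(reduced, variables)})"
        if without_v:
            expr += f" + {horner(without_v, variables)}"
        return expr

    # x**2*y + x*y + y + 1  ->  y*(x*(x*(1) + 1) + 1) + 1
    print(horner({(2, 1): 1, (1, 1): 1, (0, 1): 1, (0, 0): 1}, ["x", "y"]))
    ```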

  11. Quantum secret sharing using orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Liu, Cheng-Ji; Li, Yong-Ming

    2017-12-01

    In this work, we investigate the distinguishability of orthogonal multiqudit entangled states under restricted local operations and classical communication. According to these properties, we propose a quantum secret sharing scheme to realize three types of access structures, i.e., the (n, n)-threshold, the restricted (3, n)-threshold and restricted (4, n)-threshold schemes (called LOCC-QSS scheme). All cooperating players in the restricted threshold schemes are from two disjoint groups. In the proposed protocol, the participants use the computational basis measurement and classical communication to distinguish between those orthogonal states and reconstruct the original secret. Furthermore, we also analyze the security of our scheme in four primary quantum attacks and give a simple encoding method in order to better prevent the participant conspiracy attack.

  12. A double candidate survivable routing protocol for HAP network

    NASA Astrophysics Data System (ADS)

    He, Panfeng; Li, Chunyue; Ni, Shuyan

    2016-11-01

    To improve HAP network invulnerability, while also considering the quasi-dynamic topology of HAP networks, a simple and reliable routing protocol is proposed in this paper. The protocol first uses a double-candidate strategy for next-node selection to provide better robustness. Then, during the maintenance stage, short hello packets instead of long routing packets are used only to check link connectivity in the quasi-dynamic HAP network. The route maintenance scheme based on short hello packets can greatly reduce link overhead. Simulation results based on OPNET demonstrate the effectiveness of the proposed routing protocol.

  13. Infrared target recognition based on improved joint local ternary pattern

    NASA Astrophysics Data System (ADS)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. It describes macroscopic and microscopic textures better than traditional LBP-based methods by fusing a variety of scales. In addition, it can effectively reduce the feature dimensionality. Further, the rotation invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method can achieve competitive results compared with the state-of-the-art methods.
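
    For orientation, the basic local ternary pattern encoding underlying the method can be sketched as follows; the authors' joint orthogonal combination, rotation-invariant/uniform mapping and soft concave-convex partition are not reproduced, and the threshold value is arbitrary.

    ```python
    import numpy as np

    def ltp_codes(image, t=5):
        """Basic 3x3 local ternary pattern, split into the usual 'upper' and
        'lower' binary codes; border pixels are skipped for simplicity."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = image.shape
        upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
        lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
        center = image[1:-1, 1:-1].astype(int)
        for bit, (dy, dx) in enumerate(offsets):
            neighbor = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
            upper |= ((neighbor >= center + t).astype(np.uint8) << bit)   # +1 branch
            lower |= ((neighbor <= center - t).astype(np.uint8) << bit)   # -1 branch
        return upper, lower

    img = (np.random.rand(32, 32) * 255).astype(np.uint8)
    upper, lower = ltp_codes(img)
    ```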

  14. Combustion Control System Design of Diesel Engine via ASPR based Output Feedback Control Strategy with a PFC

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya

    2016-09-01

    In this paper, a design method of an output feedback control system with a simple feedforward input for a combustion model of diesel engine will be proposed based on the almost strictly positive real-ness (ASPR-ness) of the controlled system for a combustion control of diesel engines. A parallel feedforward compensator (PFC) design scheme which renders the resulting augmented controlled system ASPR will also be proposed in order to design a stable output feedback control system for the considered combustion model. The effectiveness of our proposed method will be confirmed through numerical simulations.

  15. Achieving bifunctional cloak via combination of passive and active schemes

    NASA Astrophysics Data System (ADS)

    Lan, Chuwen; Bi, Ke; Gao, Zehua; Li, Bo; Zhou, Ji

    2016-11-01

    In this study, a simple and delicate approach to manipulating multiple physical fields simultaneously through a combination of passive and active schemes is proposed. In the design, one physical field is manipulated with a passive scheme while the other is manipulated with an active scheme. As a proof of this concept, a bifunctional device is designed and fabricated to behave as an electric and thermal invisibility cloak simultaneously. The experimental results are found to agree well with the simulated ones, confirming the feasibility of our method. Furthermore, the proposed method could also be extended to other multi-physics fields, which might lead to potential applications in thermal, electric, and acoustic areas.

  16. Study on Manipulations of Fluids in Micro-scale and Their Applications in Physical, Bio/chemistry

    NASA Astrophysics Data System (ADS)

    Zhou, Bingpu

    Microfluidics is a highly interdisciplinary research field which manipulates, controls and analyzes fluids in micro-scale for physical and bio/chemical applications. In this thesis, several aspects of fluid manipulations in micro-scale were studied, discussed and employed for demonstrations of practical utilizations. To begin with, mixing in continuous flow microfluidics was raised and investigated. A simple method for mixing actuation based on magnetism was proposed and realized via integration of magnetically functionalized micropillar arrays inside the microfluidic channel. With such a technique, microfluidic mixing could be swiftly switched on and off via simple application or retraction of the magnetic field. Thereafter, in Chapter 3 we mainly focused on how to establish stable while tunable concentration gradients inside a microfluidic network using a simple design. The proposed scheme could also be modified with an on-chip pneumatic actuated valve to realize pulsatile/temporal concentration gradients simultaneously in ten microfluidic branches. We further applied such methodology to obtain roughness gradients on Polydimethylsiloxane (PDMS) surfaces via combinations of the microfluidic network and photo-polymerizations. The obtained materials were utilized in parallel cell culture to figure out the relationship between substrate morphologies and cell behaviors. In the second part of this work, we emphasized manipulations of microdroplets inside the microfluidic channel and explored related applications in bio/chemical aspects. Firstly, microdroplet-based microfluidic universal logic gates were successfully demonstrated via a liquid-electronic hybrid divider. For applications based on such a novel scheme of controllable droplet generation, on-demand chemical reaction within paired microdroplets was presented using an IF logic gate. Following this, another important operation on microdroplets - splitting - was investigated. Additional lateral continuous flow was applied at the bifurcation as a medium to controllably divide microdroplets with highly tunable splitting ratios. The related physical mechanism was proposed and such an approach was further adopted for rapid synthesis of multi-scale microspheres.

  17. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.

    PubMed

    Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N

    2017-05-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
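
    The "well-balanced matricization scheme" behind the TT rank groups the first k modes into rows and the remaining modes into columns. A minimal sketch of these unfoldings, and of the sum of their nuclear norms that SiLRTC-TT-style objectives penalize, is given below; the weights and solvers of the paper's algorithms are omitted.

    ```python
    import numpy as np

    def tt_unfolding(tensor, k):
        """k-th balanced unfolding: modes 1..k become rows, modes k+1..d columns."""
        rows = int(np.prod(tensor.shape[:k]))
        return tensor.reshape(rows, -1)

    def tt_nuclear_surrogate(tensor):
        """Sum of nuclear norms of all balanced unfoldings: a convex surrogate
        for the TT rank used by SiLRTC-TT-style completion objectives."""
        total = 0.0
        for k in range(1, tensor.ndim):
            s = np.linalg.svd(tt_unfolding(tensor, k), compute_uv=False)
            total += s.sum()
        return total

    x = np.random.rand(4, 5, 6, 3)
    print(tt_nuclear_surrogate(x))
    ```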

  18. Studying the precision of ray tracing techniques with Szekeres models

    NASA Astrophysics Data System (ADS)

    Koksbang, S. M.; Hannestad, S.

    2015-07-01

    The simplest standard ray tracing scheme employing the Born and Limber approximations and neglecting lens-lens coupling is used for computing the convergence along individual rays in mock N-body data based on Szekeres swiss cheese and onion models. The results are compared with the exact convergence computed using the exact Szekeres metric combined with the Sachs formalism. A comparison is also made with an extension of the simple ray tracing scheme which includes the Doppler convergence. The exact convergence is reproduced very precisely as the sum of the gravitational and Doppler convergences along rays in Lemaitre-Tolman-Bondi swiss cheese and single void models. This is not the case when the swiss cheese models are based on nonsymmetric Szekeres models. For such models, there is a significant deviation between the exact and ray traced paths and hence also the corresponding convergences. There is also a clear deviation between the exact and ray tracing results obtained when studying both nonsymmetric and spherically symmetric Szekeres onion models.
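
    For reference, the gravitational part of the convergence in a Born/Limber-type scheme of the kind described above is usually written as a weighted line-of-sight integral of the density contrast (conventions vary between authors, and this standard flat-geometry form is not necessarily the exact expression used in the paper):

    ```latex
    \kappa_{\delta} \;\simeq\; \frac{3 H_0^{2}\,\Omega_m}{2 c^{2}}
      \int_0^{\chi_s} \frac{\chi\,(\chi_s-\chi)}{\chi_s}\,
      \frac{\delta(\chi)}{a(\chi)}\,\mathrm{d}\chi ,
    ```

    with a separate Doppler contribution from peculiar velocities added when, as in the comparison above, the total convergence is reconstructed.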

  19. Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo

    2016-02-01

    Super-resolution optical fluctuation imaging (SOFI) is a superresolution fluorescence microscopy technique which enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge hereby is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used for tackling this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
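
    A minimal sketch of the two ingredients described above: the second-order SOFI image is the pixel-wise temporal variance (second cumulant) of the movie, and Fourier-domain zero-padding interpolates it onto a finer pixel grid. Normalisation choices and higher-order cumulants from the paper are not reproduced.

    ```python
    import numpy as np

    def sofi2(movie):
        """Second-order SOFI image: temporal variance (second cumulant) per pixel.
        movie has shape (frames, ny, nx)."""
        return movie.var(axis=0)

    def fourier_upsample(image, factor=2):
        """Interpolate onto a finer grid by zero-padding the Fourier spectrum."""
        ny, nx = image.shape
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        padded = np.zeros((factor * ny, factor * nx), dtype=complex)
        y0, x0 = (factor * ny - ny) // 2, (factor * nx - nx) // 2
        padded[y0:y0 + ny, x0:x0 + nx] = spectrum
        upsampled = np.fft.ifft2(np.fft.ifftshift(padded))
        return factor**2 * upsampled.real   # rescale to preserve mean intensity

    movie = np.random.poisson(5.0, size=(500, 32, 32)).astype(float)  # toy blinking movie
    fine = fourier_upsample(sofi2(movie), factor=4)
    ```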

  20. A fiber orientation-adapted integration scheme for computing the hyperelastic Tucker average for short fiber reinforced composites

    NASA Astrophysics Data System (ADS)

    Goldberg, Niels; Ospald, Felix; Schneider, Matti

    2017-10-01

    In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.

  1. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review on the recent developments of correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method because the advantages of optical processing cannot compensate the technical and/or financial cost needed for an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied in many composite filters based on linear composition of training images as an optimization means.
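
    For reference, one common form of the peak-to-correlation energy (PCE) figure of merit mentioned above is sketched below; definitions in the literature differ in detail (e.g. whether a neighbourhood around the peak is excluded from the energy term), so this is one variant, not the paper's exact metric.

    ```python
    import numpy as np

    def pce(correlation_plane):
        """Peak-to-correlation energy: squared peak over total plane energy."""
        peak = np.abs(correlation_plane).max()
        energy = np.sum(np.abs(correlation_plane) ** 2)
        return peak ** 2 / energy

    def correlate(image, reference):
        """Frequency-domain cross-correlation of an image with a reference/filter."""
        return np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference)))

    img = np.random.rand(64, 64); img -= img.mean()
    other = np.random.rand(64, 64); other -= other.mean()
    print(pce(correlate(img, img)))    # matched case: energy concentrated in the peak
    print(pce(correlate(img, other)))  # mismatched case: much lower PCE
    ```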

  2. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink are studied. The EOS transmits picture frame data to the ground via the Telemetry Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that those errors are bursty. The research proceeded by developing a computer based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN was written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.

  3. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

    The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits for accuracy at 16 digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
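
    The implementation pattern described above, an ordered sequence of forward Euler steps with complex stepsizes, can be sketched as follows. The complex substep fractions used here are placeholders chosen only to sum to one; the actual FRKC2 coefficients come from the Chebyshev-polynomial factorization and are available from the linked implementation.

    ```python
    import numpy as np

    def factored_step(rhs, u, dt, substep_fractions):
        """Advance u by dt through an ordered sequence of forward Euler substeps
        with (possibly complex) stepsizes summing to dt."""
        u = u.astype(complex)
        for f in substep_fractions:
            u = u + (f * dt) * rhs(u)
        return u.real

    # Illustrative complex-conjugate pairs summing to 1 (not the FRKC2 values).
    fractions = np.array([0.25 + 0.1j, 0.25 - 0.1j, 0.25 + 0.05j, 0.25 - 0.05j])
    assert abs(fractions.sum() - 1.0) < 1e-12

    # Diffusive test problem: 1-D heat equation du/dt = u_xx on a periodic grid.
    n, dx = 128, 1.0 / 128
    def heat_rhs(u):
        return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

    u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))
    u = factored_step(heat_rhs, u, dt=2e-5, substep_fractions=fractions)
    ```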

  4. Method and apparatus for configuration control of redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1991-01-01

    A method and apparatus to control a robot or manipulator configuration over the entire motion based on augmentation of the manipulator forward kinematics is disclosed. A set of kinematic functions is defined in Cartesian or joint space to reflect the desirable configuration that will be achieved in addition to the specified end-effector motion. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. A task-based adaptive scheme is then utilized to directly control the configuration variables so as to achieve tracking of some desired reference trajectories throughout the robot motion. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The present invention can also be used for optimization of any kinematic objective function, or for satisfaction of a set of kinematic inequality constraints, as in an obstacle avoidance problem. In contrast to pseudoinverse-based methods, the configuration control scheme ensures cyclic motion of the manipulator, which is an essential requirement for repetitive operations. The control law is simple and computationally very fast, and does not require either the complex manipulator dynamic model or the complicated inverse kinematic transformation. The configuration control scheme can alternatively be implemented in joint space.

  5. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes respectively based on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.
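
    Both QPE schemes ultimately convert reflectivity to rain rate through a Z-R power law. The snippet below shows that basic conversion step with the classic Marshall-Palmer coefficients as illustrative defaults; the study fits its own relations per scheme and storm cell.

    ```python
    def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
        """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.
        The Marshall-Palmer defaults (a=200, b=1.6) are illustrative only."""
        z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6 m^-3)
        return (z_linear / a) ** (1.0 / b)

    for dbz in (20, 35, 50):
        print(dbz, round(rain_rate_from_reflectivity(dbz), 2))  # ~0.65, ~5.6, ~49 mm/h
    ```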

  6. Psychophysical experiments on the PicHunter image retrieval system

    NASA Astrophysics Data System (ADS)

    Papathomas, Thomas V.; Cox, Ingemar J.; Yianilos, Peter N.; Miller, Matt L.; Minka, Thomas P.; Conway, Tiffany E.; Ghosn, Joumana

    2001-01-01

    Psychophysical experiments were conducted on PicHunter, a content-based image retrieval (CBIR) experimental prototype with the following properties: (1) Based on a model of how users respond, it uses Bayes's rule to predict what target users want, given their actions. (2) It possesses an extremely simple user interface. (3) It employs an entropy- based scheme to improve convergence. (4) It introduces a paradigm for assessing the performance of CBIR systems. Experiments 1-3 studied human judgment of image similarity to obtain data for the model. Experiment 4 studied the importance of using: (a) semantic information, (b) memory of earlier input, and (c) relative and absolute judgments of similarity. Experiment 5 tested an approach that we propose for comparing performances of CBIR systems objectively. Finally, experiment 6 evaluated the most informative display-updating scheme that is based on entropy minimization, and confirmed earlier simulation results. These experiments represent one of the first attempts to quantify CBIR performance based on psychophysical studies, and they provide valuable data for improving CBIR algorithms. Even though they were designed with PicHunter in mind, their results can be applied to any CBIR system and, more generally, to any system that involves judgment of image similarity by humans.

  7. Simulating Self-Assembly with Simple Models

    NASA Astrophysics Data System (ADS)

    Rapaport, D. C.

    Results from recent molecular dynamics simulations of virus capsid self-assembly are described. The model is based on rigid trapezoidal particles designed to form polyhedral shells of size 60, together with an atomistic solvent. The underlying bonding process is fully reversible. More extensive computations are required than in previous work on icosahedral shells built from triangular particles, but the outcome is a high yield of closed shells. Intermediate clusters have a variety of forms, and bond counts provide a useful classification scheme.

  8. An efficient transport solver for tokamak plasmas

    DOE PAGES

    Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...

    2017-01-03

    A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.

  9. On a Non-Reflecting Boundary Condition for Hyperbolic Conservation Laws

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.

    2003-01-01

    A non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented. The technique is based on the first principle of non-reflecting, plane wave propagation and the hyperbolicity of the Euler equation system. The NRBC is simple and effective, provided the numerical scheme maintains locally a C(sup 1) continuous solution at the boundary. Several numerical examples in 1D, 2D, and 3D space are illustrated to demonstrate its robustness in practical computations.

  10. Transferable and flexible label-like macromolecular memory on arbitrary substrates with high performance and a facile methodology.

    PubMed

    Lai, Ying-Chih; Hsu, Fang-Chi; Chen, Jian-Yu; He, Jr-Hau; Chang, Ting-Chang; Hsieh, Ya-Ping; Lin, Tai-Yuan; Yang, Ying-Jay; Chen, Yang-Fang

    2013-05-21

    A newly designed transferable and flexible label-like organic memory based on a graphene electrode behaves like a sticker, and can be readily placed on desired substrates or devices for diversified purposes. The memory label exhibits excellent performance despite its physical presentation. This may greatly extend memory applications in various advanced electronics and provide a simple scheme for integration with other electronics.

  11. Absolute Bunch Length Measurements at the ALS by Incoherent Synchrotron Radiation Fluctuation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippetto, D.; /Frascati; Sannibale, F.

    2008-01-24

    By analyzing the pulse-to-pulse intensity fluctuations of the radiation emitted by a charged particle in the incoherent part of the spectrum, it is possible to extract information about the spatial distribution of the beam. At the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory, we have developed and tested a simple scheme based on this principle that allows for the absolute measurement of the bunch length. A description of the method and the experimental results are presented.

  12. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  13. Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namiki, Ryo

    2011-04-15

    An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.

  14. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  15. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three iterated Fourier phase holograms. The illumination is performed with three laser beams of primary colors. A divergent-wavefront geometry is used to achieve an increased throw angle of the projection compared to plane-wave illumination. Light fibers are used as light guides in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at optimized positions with respect to the light modulator. Absorbing spectral filters are implemented to multiplex the three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order is practically invisible with divergent illumination, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the absence of an imaging lens; a single LCoS (liquid crystal on silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, and fingerprints; and simple calculations based on the fast Fourier transform (FFT), easily processed in real time on a GPU (graphics processing unit).

  16. Design of an FPGA-based electronic flow regulator (EFR) for spacecraft propulsion system

    NASA Astrophysics Data System (ADS)

    Manikandan, J.; Jayaraman, M.; Jayachandran, M.

    2011-02-01

    This paper describes a scheme for electronically regulating the flow of propellant to the thrusters from a high-pressure storage tank used in spacecraft applications. Precise delivery of propellant to the thrusters ensures that the propulsion system operates at best efficiency by maximizing propellant and power utilization for the mission. The proposed field programmable gate array (FPGA) based electronic flow regulator (EFR) ensures precise flow of propellant to the thrusters from the high-pressure storage tank. This paper presents the hardware and software design of the electronic flow regulator and the implementation of the regulation logic on an FPGA. The motivation for the proposed FPGA-based electronic flow regulation lies in the disadvantages of the conventional approach of using analog circuits. Digital flow regulation overcomes its analog equivalent because digital circuits are highly flexible, are less affected by noise, give repeatable and accurate performance, interface easily to computers, allow data storage, and have a lower failure rate. FPGAs also have certain advantages over ASICs and microprocessors/micro-controllers that motivated the choice of an FPGA-based electronic flow regulator. Moreover, since the control algorithm is implemented in software, it can be modified without changing the hardware. The scheme is simple enough to adopt for a wide range of applications where flow must be regulated for efficient operation. The proposed design is based on a space-qualified re-configurable FPGA and a hybrid micro circuit (HMC). A graphical user interface (GUI) based application software has also been developed for debugging, monitoring and controlling the electronic flow regulator from a PC COM port.

  17. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flow and smoothed finite element methods to calculate the transient dynamic response of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented, based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. Comparisons with referenced works, including experiments, show that the proposed 3D IS-FEM ensures stability of the scheme with second-order spatial convergence, and that the IS-FEM is fairly insensitive to the mesh size ratio over a wide range.

  18. Modeling and Detection of Ice Particle Accretion in Aircraft Engine Compression Systems

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Simon, Donald L.; Guo, Ten-Huei

    2012-01-01

    The accretion of ice particles in the core of commercial aircraft engines has been an ongoing aviation safety challenge. While no accidents have resulted from this phenomenon to date, numerous engine power loss events ranging from uneventful recoveries to forced landings have been recorded. As a first step to enabling mitigation strategies during ice accretion, a detection scheme must be developed that is capable of being implemented on board modern engines. In this paper, a simple detection scheme is developed and tested using a realistic engine simulation with approximate ice accretion models based on data from a compressor design tool. These accretion models are implemented as modified Low Pressure Compressor maps and have the capability to shift engine performance based on a specified level of ice blockage. Based on results from this model, it is possible to detect the accretion of ice in the engine core by observing shifts in the typical sensed engine outputs. Results are presented in which, for a 0.1 percent false positive rate, a true positive detection rate of 98 percent is achieved.
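
    As a rough illustration of detection by observing shifts in the typical sensed engine outputs, the sketch below flags possible accretion when any sensed output deviates from its nominal (un-iced) value by more than a fixed number of standard deviations. The sensor names, values, and threshold are hypothetical and not taken from the record; the threshold is what trades off the false-positive and true-positive rates quoted above.

    ```python
    import numpy as np

    def detect_ice_accretion(sensed, nominal, sigma, threshold=3.0):
        """Flag possible ice accretion when the normalized shift of any sensed
        engine output from its nominal value exceeds the threshold.
        `sensed`, `nominal`, `sigma` are dicts keyed by (hypothetical) sensor names."""
        for name, value in sensed.items():
            z = abs(value - nominal[name]) / sigma[name]
            if z > threshold:
                return True, name, z
        return False, None, 0.0

    # Hypothetical example with made-up sensor values.
    sensed  = {"T25": 310.0, "P25": 248.0, "N2": 0.962}
    nominal = {"T25": 305.0, "P25": 250.0, "N2": 0.970}
    sigma   = {"T25": 1.2,   "P25": 1.0,   "N2": 0.004}
    print(detect_ice_accretion(sensed, nominal, sigma))
    ```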

  19. On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit

    2012-12-01

    This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
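
    The per-BS power-allocation subproblem inside an iterative water-filling algorithm reduces to classical water-filling over the eigenmodes of that BS's transmit correlation matrix. The sketch below shows only this single-link step under a Gaussian-channel assumption; the bisection tolerance and the example gains are arbitrary and not taken from the record.

    ```python
    import numpy as np

    def water_filling(gains, total_power, tol=1e-9):
        """Water-filling over parallel channels with gains g_k:
        p_k = max(0, mu - 1/g_k), with the water level mu chosen by bisection
        so that sum_k p_k = total_power."""
        g = np.asarray(gains, dtype=float)
        lo, hi = 0.0, total_power + 1.0 / g.min()
        while hi - lo > tol:
            mu = 0.5 * (lo + hi)
            p = np.maximum(0.0, mu - 1.0 / g)
            if p.sum() > total_power:
                hi = mu
            else:
                lo = mu
        return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

    # Eigenvalues of a BS's transmit correlation matrix play the role of channel gains.
    print(water_filling([2.0, 1.0, 0.25], total_power=3.0))
    ```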

  20. Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Nnolim, Uche A.

    2016-07-01

    An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering of salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme in addition to utilizing the unsymmetric trimmed mean/median deviation to filter image noise while greatly preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters selected manually or adaptively estimated from the image statistics. It is further combined with the partial differential equations (PDE)-based AD for edge preservation at high NDs to enhance the properties of the trimmed mean filter. Based on experimental results, the proposed filter easily and consistently outperforms the median filter and its other variants ranging from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enables the filter to avoid smoothing an uncorrupted image, and filtering is activated only when impulse noise is present. Ultimately, the particular properties of the filter make its combination with the AD algorithm a unique and powerful edge-preservation smoothing filter at high-impulse NDs.
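
    A minimal version of the switching idea, treating only the extreme gray levels as impulse candidates and replacing them by a trimmed local median, is sketched below. The full filter in the record additionally uses the trimmed mean/median deviation, adaptive thresholds, and the PDE-based diffusion stage, none of which are shown here.

    ```python
    import numpy as np

    def switching_trimmed_median(img, win=3):
        """Simple switching filter for salt-and-pepper noise: only pixels at the
        extreme values (0 or 255) are treated as corrupted and replaced by the
        median of the non-extreme (trimmed) pixels in the local window."""
        pad = win // 2
        padded = np.pad(img, pad, mode="edge")
        out = img.copy()
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                if img[i, j] in (0, 255):                         # switching test
                    window = padded[i:i + win, j:j + win].ravel()
                    good = window[(window > 0) & (window < 255)]  # trim impulses
                    if good.size:
                        out[i, j] = int(np.median(good))
        return out

    noisy = np.array([[120, 0, 118], [255, 119, 121], [117, 255, 122]], dtype=np.uint8)
    print(switching_trimmed_median(noisy))
    ```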

  1. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A robust control scheme to accomplish accurate trajectory tracking for an integrated system of manipulator-plus-actuators is proposed. The control scheme comprises a feedforward and a feedback controller. The feedforward controller contains any known part of the manipulator dynamics that can be used for online control. The feedback controller consists of adaptive position and velocity feedback gains and an auxiliary signal which is simply generated by a fixed-gain proportional/integral/derivative controller. The feedback controller is updated by very simple adaptation laws which contain both proportional and integral adaptation terms. By introduction of a simple sigma modification to the adaptation laws, robustness is guaranteed in the presence of unmodeled dynamics and disturbances.
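
    To make the sigma modification concrete, the sketch below updates a single adaptive gain with proportional and integral adaptation terms plus a leakage term. The gains, signals, and discretization are placeholders for illustration and are not the adaptation laws derived in the record.

    ```python
    def adapt_gain(k_int, e, x, gamma_p=5.0, gamma_i=1.0, sigma=0.05, dt=1e-3):
        """One step of an adaptive feedback gain with proportional and integral
        adaptation terms plus sigma modification: the leak -sigma*k_int keeps the
        integral part bounded under unmodeled dynamics and disturbances."""
        k_int = k_int + dt * (gamma_i * e * x - sigma * k_int)
        k = k_int + gamma_p * e * x
        return k, k_int

    k, k_int = 0.0, 0.0
    for _ in range(1000):
        e, x = 0.1, 1.0        # hypothetical tracking error and regressor signal
        k, k_int = adapt_gain(k_int, e, x)
    print(k)
    ```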

  2. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest-descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
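
    One possible reading of the iterative, steepest-descent deconvolution in the pattern domain is sketched below: the dose map is repeatedly corrected by the mismatch between the blurred dose and the desired exposure. The Gaussian proximity function, step size, and iteration count are made up for illustration, and a symmetric PSF is assumed so that convolution equals the adjoint operation in the gradient.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def correct_proximity(target, psf, alpha=0.5, iters=200):
        """Iterative steepest-descent deconvolution in the pattern domain: adjust
        the dose map d so that (psf * d) approaches the desired exposure.
        Gradient step on 0.5*||psf*d - target||^2, valid for a symmetric psf;
        doses are clipped to remain non-negative."""
        dose = target.astype(float)
        for _ in range(iters):
            exposure = fftconvolve(dose, psf, mode="same")
            dose -= alpha * (exposure - target)
            np.clip(dose, 0.0, None, out=dose)
        return dose

    # Hypothetical example: an 8x8 feature blurred by a Gaussian proximity function.
    x = np.arange(-5, 6)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
    psf /= psf.sum()
    pattern = np.zeros((32, 32))
    pattern[12:20, 12:20] = 1.0
    dose = correct_proximity(pattern, psf)
    print(dose.max(), dose.min())
    ```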

  3. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  4. Four-body correlation embedded in antisymmetrized geminal power wave function.

    PubMed

    Kawasaki, Airi; Sugino, Osamu

    2016-12-28

    We extend Coleman's antisymmetrized geminal power (AGP) to develop a wave function theory that can incorporate up to four-body correlation in a region of strong correlation. To facilitate the variational determination of the wave function, the total energy is rewritten in terms of the traces of geminals. This novel trace formula is applied to a simple model system consisting of a one-dimensional Hubbard ring with a site of strong correlation. Our scheme significantly improves the result obtained by the AGP-configuration interaction scheme of Uemura et al. and also achieves a more efficient compression of the degrees of freedom of the wave function. We regard the result as a step toward a first-principles wave function theory for a strongly correlated point defect or adsorbate embedded in an AGP-based mean-field medium.

  5. Proposed data compression schemes for the Galileo S-band contingency mission

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Tong, Kevin

    1993-01-01

    The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been, and will continue to be, devoted to attempts to open the HGA. Various options for improving Galileo's telemetry downlink performance are also being evaluated in the event that the HGA does not open before Jupiter arrival. Among all viable options, the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight reprogramming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT). The implementation complexity of the ICT scheme is much lower than that of DCT-based schemes, yet the performance of the two algorithms is indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
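
    The dictionary-based part of an LZW scheme can be illustrated in a few lines; here a Python dict stands in for the hash-based string search (the flight code's custom hashing function and the ICT image path are not shown, and the code below is only a generic LZW sketch, not the Galileo implementation).

    ```python
    def lzw_compress(data: bytes):
        """Minimal LZW: a dynamic dictionary of byte strings -> codes, with a
        hash table (Python dict) used for the string search."""
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        out = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc                      # extend the current match
            else:
                out.append(dictionary[w])   # emit code for the longest match
                dictionary[wc] = next_code  # add the new string to the dictionary
                next_code += 1
                w = bytes([byte])
        if w:
            out.append(dictionary[w])
        return out

    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
    ```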

  6. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.

  7. An up-link power control for demand assignment International Business Satellite Communications Network

    NASA Astrophysics Data System (ADS)

    Nohara, Mitsuo; Takeuchi, Yoshio; Takahata, Fumio

    Up-link power control (UPC) is one of the essential technologies to provide efficient satellite communication systems operated at frequency bands above 10 GHz. A simple and cost-effective UPC scheme applicable to a demand assignment international business satellite communications system has been developed. This paper presents the UPC scheme, including the hardware implementation and its performance.

  8. Investigation of the particle-core structure of odd-mass nuclei in the NpNn scheme

    NASA Astrophysics Data System (ADS)

    Bucurescu, D.; Cata, G.; Cutoiu, D.; Dragulescu, E.; Ivasu, M.; Zamfir, N. V.; Gizon, A.; Gizon, J.

    1989-10-01

    The NpNn scheme is applied to data related to collective band structures determined by the unique parity shell model orbitals in odd-A nuclei from the mass regions A ≈ 80-100 and A ≈ 130. Simple systematics are obtained which give a synthetic picture of the evolution of the particle-core coupling in these nuclear regions.

  9. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least-square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
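
    One way to realize such a part-to-part update is a Gauss-Newton step on the least-square error between the measured flange edge and the reference geometry. The sketch below uses a toy linear stand-in for the forming process and an offline-estimated sensitivity matrix; the gain, dimensions, and noise level are arbitrary and not from the Numisheet’08 study.

    ```python
    import numpy as np

    def ilc_update(params, flange, reference, jacobian, gain=0.5):
        """One iterative-learning step: Gauss-Newton update on 0.5*||flange - reference||^2
        using a (possibly estimated) sensitivity matrix d(flange)/d(params)."""
        r = flange - reference
        step, *_ = np.linalg.lstsq(jacobian, r, rcond=None)
        return params - gain * step

    # Hypothetical setup: 3 process parameters (e.g. local blank-holder forces),
    # flange edge sampled at 8 points, sensitivity matrix J estimated offline.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(8, 3))
    p_true = np.array([1.0, -0.5, 0.3])
    reference = J @ p_true
    params = np.zeros(3)
    for _ in range(10):                                    # one update per produced part
        flange = J @ params + 0.01 * rng.normal(size=8)    # stand-in for the measurement
        params = ilc_update(params, flange, reference, J)
    print(params)
    ```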

  10. A note on quantum teleportation without the Bell-state measurement in superconducting qubits

    NASA Astrophysics Data System (ADS)

    Gomes, R. M.; Cardoso, W. B.; Avelar, A. T.; Baseia, B.

    2014-02-01

    In this paper, we offer a simple scheme to teleport a quantum state from a superconducting qubit to another spatially separated qubit, both coupled to a coplanar waveguide microwave resonator. In this scheme the Bell-state measurement is not necessary, which simplifies the experimental observation. We revisit the effective model that describes such a coupled system and present the teleportation scheme with a fidelity of 98.7% and a success probability of 25%. We also verify the feasibility of this protocol for the transmon qubit parameters.

  11. On a two-pass scheme without a faraday mirror for free-space relativistic quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravtsov, K. S.; Radchenko, I. V.; Korol'kov, A. V.

    2013-05-15

    The stability of destructive interference independent of the input polarization and the state of a quantum communication channel in fiber optic systems used in quantum cryptography plays a principal role in providing the security of communicated keys. A novel optical scheme is proposed that can be used both in relativistic quantum cryptography for communicating keys in open space and for communicating them over fiber optic lines. The scheme ensures stability of destructive interference and admits simple automatic balancing of a fiber interferometer.

  12. WEB-DHM: A distributed biosphere hydrological model developed by coupling a simple biosphere scheme with a hillslope hydrological model

    USDA-ARS?s Scientific Manuscript database

    The coupling of land surface models and hydrological models potentially improves the land surface representation, benefiting both the streamflow prediction capabilities as well as providing improved estimates of water and energy fluxes into the atmosphere. In this study, the simple biosphere model 2...

  13. A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.

    1989-01-01

    A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.

  14. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (the LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this representation to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller, and we evaluate how the scheme scales to simulated plants with a high number of degrees of freedom (7 DOFs).
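
    The feedback-error-learning principle (the feedback command serving as the teaching signal for the feedforward model) can be sketched in a few lines. The linear feature vector below is only a stand-in for the LWPR/cerebellar machinery, and the 1-D plant, gains, and learning rate are invented for illustration.

    ```python
    import numpy as np

    def fel_step(w, features, u_fb, lr=0.01):
        """Feedback-error-learning update: the low-gain feedback command u_fb is
        treated as the teaching signal for the adaptive feedforward model
        u_ff = w . features;  w <- w + lr * u_fb * features."""
        return w + lr * u_fb * features

    # Toy loop: learn to cancel an unknown constant disturbance on a 1-D plant.
    w = np.zeros(2)
    state = 0.0
    for t in range(2000):
        features = np.array([1.0, state])   # stand-in for the regression-model features
        u_ff = w @ features
        error = 0.0 - state                 # track a zero reference
        u_fb = 0.5 * error                  # simple low-gain feedback term
        u = u_ff + u_fb
        state += 0.1 * (-state + u + 1.0)   # plant with unknown constant disturbance
        w = fel_step(w, features, u_fb)
    print(w, state)                         # w[0] approaches -1, state approaches 0
    ```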

  15. Antibiotic removal from water: A highly efficient silver phosphate-based Z-scheme photocatalytic system under natural solar light.

    PubMed

    Wang, Jiajia; Chen, Hui; Tang, Lin; Zeng, Guangming; Liu, Yutang; Yan, Ming; Deng, Yaocheng; Feng, Haopeng; Yu, Jiangfang; Wang, Longlu

    2018-10-15

    Photocatalytic degradation is an alternative method for removing pharmaceutical compounds from water; however, efficient removal rates are hard to achieve because of the low efficiency of photocatalysts. In this study, an efficient Z-scheme photocatalyst was constructed by integrating graphitic carbon nitride (CN) and reduced graphene oxide (rGO) with silver phosphate (AP) via a facile precipitation method. Notably, the ternary AP/rGO/CN composite showed superior photocatalytic and anti-photocorrosion performance under both intense sunlight and weak indoor light irradiation. NOF was completely degraded in only 30 min, and about 85% of NOF was mineralized after 2 h of intense sunlight irradiation. rGO works not only as a sheltering layer that protects AP from photocorrosion but also as a mediator for Z-scheme electron transport, which protects AP from photoreduction. This strategy could be a promising way to construct highly efficient photocatalytic systems for the removal of antibiotics under natural light irradiation.

  16. Spacecraft-charging mitigation of a high-power electron beam emitted by a magnetospheric spacecraft: Simple theoretical model for the transient of the spacecraft potential

    DOE PAGES

    Castello, Federico Lucco; Delzanno, Gian Luca; Borovsky, Joseph E.; ...

    2018-05-29

    A spacecraft-charging mitigation scheme necessary for the operation of a high-power electron beam in the low-density magnetosphere is analyzed. The scheme is based on a plasma contactor, i.e. a high-density charge-neutral plasma emitted prior to and during beam emission, and its ability to emit high ion currents without strong space-charge limitations. A simple theoretical model for the transient of the spacecraft potential and contactor expansion during beam emission is presented. The model focuses on the contactor ion dynamics and is valid in the limit when the ion contactor current is equal to the beam current. The model is found in very good agreement with Particle-In-Cell simulations over a large parametric study that varies the initial expansion time of the contactor, the contactor current and the ion mass. The model highlights the physics of the spacecraft-charging mitigation scheme, indicating that the most important part of the dynamics is the evolution of the outermost ion front which is pushed away by the charge accumulated in the system by the beam. The model can be also used to estimate the long-time evolution of the spacecraft potential. For a short contactor expansion (0.3 or 0.6 ms Helium plasma or 0.8 ms Argon plasma, both with 1 mA current) it yields a peak spacecraft potential of the order of 1-3 kV. This implies that a 1-mA relativistic electron beam would be easily emitted by the spacecraft.

  17. Non-Markovian properties and multiscale hidden Markovian network buried in single molecule time series

    NASA Astrophysics Data System (ADS)

    Sultana, Tahmina; Takagi, Hiroaki; Morimatsu, Miki; Teramoto, Hiroshi; Li, Chun-Biu; Sako, Yasushi; Komatsuzaki, Tamiki

    2013-12-01

    We present a novel scheme to extract a multiscale state space network (SSN) from single-molecule time series. The multiscale SSN is a type of hidden Markov model that takes into account both multiple states buried in the measurement and memory effects in the process of the observable whenever they exist. Most biological systems function in a nonstationary manner across multiple timescales. Combined with a recently established nonlinear time series analysis based on information theory, a simple scheme is proposed to deal with the properties of multiscale and nonstationarity for a discrete time series. We derived an explicit analytical expression of the autocorrelation function in terms of the SSN. To demonstrate the potential of our scheme, we investigated single-molecule time series of dissociation and association kinetics between epidermal growth factor receptor (EGFR) on the plasma membrane and its adaptor protein Ash/Grb2 (Grb2) in an in vitro reconstituted system. We found that our formula successfully reproduces their autocorrelation function for a wide range of timescales (up to 3 s), and the underlying SSNs change their topographical structure as a function of the timescale; while the corresponding SSN is simple at the short timescale (0.033-0.1 s), the SSN at the longer timescales (0.1 s to ~3 s) becomes rather complex in order to capture multiscale nonstationary kinetics emerging at longer timescales. It is also found that visiting the unbound form of the EGFR-Grb2 system approximately resets all information of history or memory of the process.

  18. Spacecraft-charging mitigation of a high-power electron beam emitted by a magnetospheric spacecraft: Simple theoretical model for the transient of the spacecraft potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Federico Lucco; Delzanno, Gian Luca; Borovsky, Joseph E.

    A spacecraft-charging mitigation scheme necessary for the operation of a high-power electron beam in the low-density magnetosphere is analyzed. The scheme is based on a plasma contactor, i.e. a high-density charge-neutral plasma emitted prior to and during beam emission, and its ability to emit high ion currents without strong space-charge limitations. A simple theoretical model for the transient of the spacecraft potential and contactor expansion during beam emission is presented. The model focuses on the contactor ion dynamics and is valid in the limit when the ion contactor current is equal to the beam current. The model is found in very good agreement with Particle-In-Cell simulations over a large parametric study that varies the initial expansion time of the contactor, the contactor current and the ion mass. The model highlights the physics of the spacecraft-charging mitigation scheme, indicating that the most important part of the dynamics is the evolution of the outermost ion front which is pushed away by the charge accumulated in the system by the beam. The model can be also used to estimate the long-time evolution of the spacecraft potential. For a short contactor expansion (0.3 or 0.6 ms Helium plasma or 0.8 ms Argon plasma, both with 1 mA current) it yields a peak spacecraft potential of the order of 1-3 kV. This implies that a 1-mA relativistic electron beam would be easily emitted by the spacecraft.

  19. A factorial assessment of the sensitivity of the BATS land-surface parameterization scheme. [BATS (Biosphere-Atmosphere Transfer Scheme)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson-Sellers, A.

    Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.

  20. On Space-Time Inversion Invariance and its Relation to Non-Dissipatedness of a CESE Core Scheme

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2006-01-01

    The core motivating ideas of the space-time CESE method are clearly presented and critically analyzed. It is explained why these ideas result in all the simplifying and enabling features of the CESE method. A thorough discussion of the a scheme, a two-level non-dissipative CESE solver of a simple advection equation with two independent mesh variables and two equations per mesh point, is also presented. It is shown that the scheme possesses some rather intriguing properties such as: (i) its two independent mesh variables separately satisfy two decoupled three-level leapfrog schemes and (ii) it shares with the leapfrog scheme the same amplification factors, even though the a scheme and the leapfrog scheme have completely different origins and structures. It is also explained why the leapfrog scheme is not as robust as the a scheme. The amplification factors/matrices of several non-dissipative schemes are carefully studied and the key properties that contribute to their non-dissipatedness are clearly spelled out. Finally we define and establish space-time inversion (STI) invariance for several non-dissipative schemes and show that their non-dissipatedness is a result of their STI invariance.

  1. The terminator "toy" chemistry test: A simple tool to assess errors in transport schemes

    DOE PAGES

    Lauritzen, P. H.; Conley, A. J.; Lamarque, J. -F.; ...

    2015-05-04

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X → X2) and dissociation (X2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection-transport scheme preserves linear correlations. Chemistry-transport (physics-dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
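
    The conservation property the test checks follows directly from the reaction terms: with combination at rate k1*X^2 and dissociation at rate k2*X2, dX/dt + 2*dX2/dt = 0, so XT = X + 2*X2 is invariant under the chemistry alone. The sketch below verifies this with a forward-Euler step; the rate constants and initial values are placeholders (the actual test specifies spatially varying rates near the terminator), so any violation of XT in a full model must come from the transport or the coupling.

    ```python
    def toy_chemistry_rates(X, X2, k1, k2):
        """'Terminator' toy chemistry: combination X + X -> X2 (rate k1*X^2) and
        dissociation X2 -> X + X (rate k2*X2)."""
        dX = 2.0 * k2 * X2 - 2.0 * k1 * X * X
        dX2 = k1 * X * X - k2 * X2
        return dX, dX2

    # Forward-Euler check that XT is preserved by the chemistry alone.
    X, X2, k1, k2, dt = 0.8, 0.1, 1.0, 0.3, 1e-3
    XT0 = X + 2.0 * X2
    for _ in range(10000):
        dX, dX2 = toy_chemistry_rates(X, X2, k1, k2)
        X, X2 = X + dt * dX, X2 + dt * dX2
    print(X + 2.0 * X2 - XT0)   # ~0 up to floating-point round-off
    ```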

  2. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.

  3. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in the GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the complex criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall when ENSO indices derived from Pacific and Indian Ocean sea surface conditions are used, with significant improvement during the OND season. The predictability of OND rainfall based on ENSO is robust on a decadal scale compared to MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over the GHA. The study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) is associated with enhanced wet conditions.
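
    For reference, the two simpler criteria compared in the study have standard closed forms for least-squares regressions; the sketch below computes them for toy ENSO-index regressions. The data, predictors, and regression setup are invented for illustration, and the FIA criterion is not shown.

    ```python
    import numpy as np

    def aic(rss, n, k):
        """Akaike information criterion for a k-parameter least-squares model fit
        to n points (Gaussian errors): AIC = n*ln(RSS/n) + 2k."""
        return n * np.log(rss / n) + 2 * k

    def bic(rss, n, k):
        """Bayesian information criterion: BIC = n*ln(RSS/n) + k*ln(n)."""
        return n * np.log(rss / n) + k * np.log(n)

    # Hypothetical: pick the ENSO-index regression with the lowest criterion value.
    rng = np.random.default_rng(1)
    n = 40
    x = rng.normal(size=(n, 3))                    # stand-ins for three ENSO indices
    y = 0.8 * x[:, 0] + 0.1 * rng.normal(size=n)   # toy seasonal rainfall anomaly
    for k in (1, 2, 3):
        beta, rss, *_ = np.linalg.lstsq(x[:, :k], y, rcond=None)
        rss = float(rss[0]) if rss.size else float(np.sum((y - x[:, :k] @ beta) ** 2))
        print(k, aic(rss, n, k), bic(rss, n, k))
    ```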

  4. FORUM: A Suggestion for an Improved Vegetation Scheme for Local and Global Mapping and Monitoring.

    PubMed

    ADAMS

    1999-01-01

    Understanding of global ecological problems is at least partly dependent on clear assessments of vegetation change, and such assessment is always dependent on the use of a vegetation classification scheme. Use of satellite remotely sensed data is the only practical means of carrying out any global-scale vegetation mapping exercise, but if the resulting maps are to be useful to most ecologists and conservationists, they must be closely tied to clearly defined features of vegetation on the ground. Furthermore, much of the mapping that does take place involves more local-scale description of field sites; for purposes of cost and practicality, such studies usually do not involve remote sensing using satellites. There is a need for a single scheme that integrates the smallest to the largest scale in a way that is meaningful to most environmental scientists. Existing schemes are unsatisfactory for this task; they are ambiguous, unnecessarily complex, and their categories do not correspond to common-sense definitions. In response to these problems, a simple structural-physiognomically based scheme with 23 fundamental categories is proposed here for mapping and monitoring on any scale, from local to global. The fundamental categories each subdivide into more specific structural categories for more detailed mapping, but all the categories can be used throughout the world and at any scale, allowing intercomparison between regions. The next stage in the process will be to obtain the views of as many people working in as many different fields as possible, to see whether the proposed scheme suits their needs and how it should be modified. With a few modifications, such a scheme could easily be appended to an existing land cover classification scheme, such as the FAO system, greatly increasing the usefulness and accessibility of the results of the land cover classification. KEY WORDS: Vegetation scheme; Mapping; Monitoring; Land cover

  5. R&D incentives for neglected diseases.

    PubMed

    Dimitri, Nicola

    2012-01-01

    Neglected diseases are typically characterized as those for which adequate drug treatment is lacking, and the potential return on effort in research and development (R&D), to produce new therapies, is too small for companies to invest significant resources in the field. In recent years various incentive schemes to stimulate R&D by pharmaceutical firms have been considered. Broadly speaking, these can be classified either as 'push' or 'pull' programs. Hybrid options, that include push and pull incentives, have also become increasingly popular. Supporters and critics of these various incentive schemes have argued in favor of their relative merits and limitations, although the view that no mechanism is a perfect fit for all situations appears to be widely held. For this reason, the debate on the advantages and disadvantages of different approaches has been important for policy decisions, but is dispersed in a variety of sources. With this in mind, the aim of this paper is to contribute to the understanding of the economic determinants behind R&D investments for neglected diseases by comparing the relative strength of different incentive schemes within a simple economic model, based on the assumption of profit-maximizing firms. The analysis suggests that co-funded push programs are generally more efficient than pure pull programs. However, by setting appropriate intermediate goals hybrid incentive schemes could further improve efficiency.

  6. Generating multi-photon W-like states for perfect quantum teleportation and superdense coding

    NASA Astrophysics Data System (ADS)

    Li, Ke; Kong, Fan-Zhen; Yang, Ming; Ozaydin, Fatih; Yang, Qing; Cao, Zhuo-Liang

    2016-08-01

    An interesting aspect of multipartite entanglement is that for perfect teleportation and superdense coding, not the maximally entangled W states but a special class of non-maximally entangled W-like states are required. Therefore, efficient preparation of such W-like states is of great importance in quantum communications, which has not been studied as much as the preparation of W states. In this paper, we propose a simple optical scheme for efficient preparation of large-scale polarization-based entangled W-like states by fusing two W-like states or expanding a W-like state with an ancilla photon. Our scheme can also generate large-scale W states by fusing or expanding W or even W-like states. The cost analysis shows that in generating large-scale W states, the fusion mechanism achieves a higher efficiency with non-maximally entangled W-like states than maximally entangled W states. Our scheme can also start fusion or expansion with Bell states, and it is composed of a polarization-dependent beam splitter, two polarizing beam splitters and photon detectors. Requiring no ancilla photon or controlled gate to operate, our scheme can be realized with current photonics technology, and we believe it will enable advances in quantum teleportation and superdense coding in multipartite settings.

  7. Efficient Visible-Light-Driven Z-Scheme Overall Water Splitting Using a MgTa2O(6-x)N(y)/TaON Heterostructure Photocatalyst for H2 Evolution.

    PubMed

    Chen, Shanshan; Qi, Yu; Hisatomi, Takashi; Ding, Qian; Asai, Tomohiro; Li, Zheng; Ma, Su Su Khine; Zhang, Fuxiang; Domen, Kazunari; Li, Can

    2015-07-13

    An (oxy)nitride-based heterostructure for powdered Z-scheme overall water splitting is presented. Compared with the single MgTa2O(6-x)N(y) or TaON photocatalyst, a MgTa2O(6-x)N(y)/TaON heterostructure fabricated by a simple one-pot nitridation route was demonstrated to effectively suppress the recombination of carriers through efficient spatial charge separation and decreased defect density. By employing Pt-loaded MgTa2O(6-x)N(y)/TaON as the H2-evolving photocatalyst, a Z-scheme overall water splitting system with an apparent quantum efficiency (AQE) of 6.8% at 420 nm was constructed (PtO(x)-WO3 and the IO3(-)/I(-) pair were used as the O2-evolving photocatalyst and the redox mediator, respectively); its activity is roughly 7 or 360 times that obtained using Pt-TaON or Pt-MgTa2O(6-x)N(y), respectively, as the H2-evolving photocatalyst. To the best of our knowledge, this is the highest AQE among the powdered Z-scheme overall water splitting systems reported so far.

  8. MRI-based treatment planning with pseudo CT generated through atlas registration.

    PubMed

    Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-05-01

    To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  9. MRI-based treatment planning with pseudo CT generated through atlas registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uh, Jinsoo, E-mail: jinsoo.uh@stjude.org; Merchant, Thomas E.; Hua, Chiaho

    2014-05-15

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  10. MRI-based treatment planning with pseudo CT generated through atlas registration

    PubMed Central

    Uh, Jinsoo; Merchant, Thomas E.; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-01-01

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs. PMID:24784377

  11. A new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1993-01-01

    A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
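
    A hedged sketch of this flux-vector-splitting idea for the 1-D Euler equations of an ideal gas, assuming the commonly cited AUSM split-Mach and split-pressure polynomials; the exact splittings used in the paper may differ, and the function names are illustrative.

```python
# Hedged sketch, assuming the commonly cited AUSM splittings (ideal gas, gamma = 1.4);
# the exact split polynomials used in the paper may differ.
import numpy as np

def m_plus(M):
    return 0.5 * (M + abs(M)) if abs(M) > 1.0 else 0.25 * (M + 1.0) ** 2

def m_minus(M):
    return 0.5 * (M - abs(M)) if abs(M) > 1.0 else -0.25 * (M - 1.0) ** 2

def p_plus(M, p):
    return 0.5 * p * (1.0 + np.sign(M)) if abs(M) > 1.0 else 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)

def p_minus(M, p):
    return 0.5 * p * (1.0 - np.sign(M)) if abs(M) > 1.0 else 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """1-D Euler interface flux in the AUSM spirit: upwinded convective part plus split pressure."""
    aL, aR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
    HL = gamma / (gamma - 1.0) * pL / rhoL + 0.5 * uL ** 2        # total enthalpy, left
    HR = gamma / (gamma - 1.0) * pR / rhoR + 0.5 * uR ** 2        # total enthalpy, right
    m_half = m_plus(uL / aL) + m_minus(uR / aR)                   # cell-face advection Mach number
    p_half = p_plus(uL / aL, pL) + p_minus(uR / aR, pR)           # split interface pressure
    # convective quantities Phi = (rho*a, rho*a*u, rho*a*H), upwinded by the sign of m_half
    Phi = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL]) if m_half >= 0.0 \
        else np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
    return m_half * Phi + np.array([0.0, p_half, 0.0])

print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))    # Sod-like interface states
```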

  12. A new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.

  13. Case studies in configuration control for redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.

    1989-01-01

    A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Haixia; Zhang, Jing

    We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.

  15. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  16. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    NASA Astrophysics Data System (ADS)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance, which can shed light on "black-box" machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
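
    A hedged sketch of the local-classifier-per-branching-point idea described above, using scikit-learn random forests on synthetic data; the two-level hierarchy, feature values and helper names are illustrative assumptions, not the authors' data or code.

```python
# Hedged sketch of a hierarchical (local model per branching point) Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# toy 2-level thematic hierarchy: coarse class -> fine classes (illustrative)
hierarchy = {"forest": ["broadleaf", "conifer"], "wetland": ["marsh", "bog"]}
fine_to_coarse = {f: c for c, fs in hierarchy.items() for f in fs}

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                                   # synthetic spectral features
fine_labels = rng.choice(list(fine_to_coarse), size=400)
coarse_labels = np.array([fine_to_coarse[f] for f in fine_labels])

# local model at the root: predicts the coarse class
root_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, coarse_labels)

# one local model per coarse class: predicts the fine class among its children
local_clf = {}
for coarse in hierarchy:
    mask = coarse_labels == coarse
    local_clf[coarse] = RandomForestClassifier(n_estimators=100, random_state=0).fit(
        X[mask], fine_labels[mask])

def predict_hierarchical(Xnew):
    """Descend the thematic tree: root model first, then the matching local model."""
    coarse_pred = root_clf.predict(Xnew)
    fine_pred = np.empty(len(Xnew), dtype=object)
    for coarse in hierarchy:
        m = coarse_pred == coarse
        if m.any():
            fine_pred[m] = local_clf[coarse].predict(Xnew[m])
    return fine_pred

print(predict_hierarchical(X[:5]))
```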

  17. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = u Δt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
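
    A hedged sketch of the large-time-step idea above for 1-D advection on a periodic, uniform grid, assuming first-order upwind as the base scheme: the Courant number is split as c = N + Δc, the integer part becomes an exact N-cell shift, and the usual update is applied with the fractional part.

```python
# Hedged sketch: first-order upwind extended to arbitrary Courant number c > 0
# (periodic grid, advection speed > 0); split c = N + dc with N integer, 0 <= dc < 1.
import numpy as np

def upwind_large_dt(phi, c):
    """One time step: exact integer shift by N cells, then standard upwind with dc."""
    N = int(np.floor(c))                          # integer part of the Courant number
    dc = c - N                                    # fractional part handled by the usual scheme
    phi = np.roll(phi, N)                         # periodic shift over N whole cells
    return phi - dc * (phi - np.roll(phi, 1))     # standard upwind update with Courant number dc

x = np.linspace(0.0, 1.0, 200, endpoint=False)
phi = np.exp(-200.0 * (x - 0.3) ** 2)             # smooth initial profile
for _ in range(10):
    phi = upwind_large_dt(phi, c=3.4)             # Courant number well above 1, still stable
```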

  18. Construction of mutually unbiased bases with cyclic symmetry for qubit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyfarth, Ulrich; Ranade, Kedar S.

    2011-10-15

    For the complete estimation of arbitrary unknown quantum states by measurements, the use of mutually unbiased bases has been well established in theory and experiment for the past 20 years. However, most constructions of these bases make heavy use of abstract algebra and the mathematical theory of finite rings and fields, and no simple and generally accessible construction is available. This is particularly true in the case of a system composed of several qubits, which is arguably the most important case in quantum information science and quantum computation. In this paper, we close this gap by providing a simple and straightforward method for the construction of mutually unbiased bases in the case of a qubit register. We show that our construction is also accessible to experiments, since only Hadamard and controlled-phase gates are needed, which are available in most practical realizations of a quantum computer. Moreover, our scheme possesses the optimal scaling possible, i.e., the number of gates scales only linearly in the number of qubits.

  19. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
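
    For context, a hedged sketch of a numerical diffusion-coefficient estimate via the Einstein relation D = lim MSD(t)/(4t) in two dimensions; the trajectory here is a plain random walk standing in for the billiard dynamics, so it illustrates neither the Machta-Zwanzig approximation nor the paper's corrections.

```python
# Hedged sketch: diffusion coefficient from the long-time slope of the mean square
# displacement of an ensemble of 2-D random walks (a stand-in for billiard trajectories).
import numpy as np

rng = np.random.default_rng(5)
steps = rng.normal(scale=1.0, size=(4000, 400, 2))       # (time, walkers, xy) unit-variance steps
paths = np.cumsum(steps, axis=0)                         # cumulative displacement of each walker
t = np.arange(1, steps.shape[0] + 1)
msd = np.mean(np.sum(paths ** 2, axis=2), axis=1)        # ensemble-averaged MSD(t)
D = msd[-1] / (4.0 * t[-1])                              # 2-D Einstein relation
print(D)                                                 # about 0.5 for unit-variance steps
```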

  20. Simulating Freshwater Availability under Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Zeng, N.; Motesharrei, S.; Gustafson, K. C.; Rivas, J.; Miralles-Wilhelm, F.; Kalnay, E.

    2013-12-01

    Freshwater availability is a key factor for regional development. Precipitation, evaporation, river inflow and outflow are the major terms in the estimate of regional water supply. In this study, we aim to obtain a realistic estimate for these variables from 1901 to 2100. First we calculated the ensemble mean precipitation using the 2011-2100 RCP4.5 output (re-sampled to half-degree spatial resolution) from 16 General Circulation Models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The projections are then combined with the half-degree 1901-2010 Climate Research Unit (CRU) TS3.2 dataset after bias correction. We then used the combined data to drive our UMD Earth System Model (ESM), in order to generate evaporation and runoff. We also developed a River-Routing Scheme based on the idea of Taikan Oki, as part of the ESM. It is capable of calculating river inflow and outflow for any region, driven by the gridded runoff output. River direction and slope information from the Global Dominant River Tracing (DRT) dataset are included in our scheme. The effects of reservoirs/dams are parameterized based on a few simple factors such as soil moisture, population density and geographic regions. Simulated river flow is validated with river gauge measurements for the world's major rivers. We have applied our river flow calculation to two data-rich watersheds in the United States: the Phoenix AMA watershed and the Potomac River Basin. The results are used in our SImple WAter model (SIWA) to explore water management options.

  1. Impact of Vegetation Cover Fraction Parameterization schemes on Land Surface Temperature Simulation in the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.

    2017-12-01

    The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) in the Tibetan Plateau (TP). The first scheme is a simple land cover (LC) based method (hereafter CTL); the second is based on remote sensing observation (hereafter named as RNVCF), in which multi-year climatological VCFs are derived from Moderate-resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third VCF parameterization scheme derives VCF from the leaf area index (LAI) simulated by the LSM and a clumping index at every model time step (hereafter named as SMVCF). LST and soil temperature simulated by CoLM with the three VCF parameterization schemes were evaluated against satellite LST observations and in situ soil temperature observations, respectively, during the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30, local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, especially in spring, where the reduction is about 6.5 K. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimate (12.1 K), but this overestimation is reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid network (Maqu in the eastern TP), CoLM performs similarly to Naqu. However, at both the Naqu and Maqu networks, RNVCF shows significant overestimation in summer, perhaps because RNVCF ignores the growing characteristics of the vegetation (mainly grass) in these two regions. Our results demonstrate that VCF schemes have a significant influence on LSM performance, and indicate that it is important to consider vegetation growing characteristics in VCF schemes for different LCs.

  2. The Mine Locomotive Wireless Network Strategy Based on Successive Interference Cancellation

    PubMed Central

    Wu, Liaoyuan; Han, Jianghong; Wei, Xing; Shi, Lei; Ding, Xu

    2015-01-01

    We consider a wireless network strategy based on successive interference cancellation (SIC) for mine locomotives. We first build the original mathematical model for the strategy, which is a non-convex model. Then, we examine this model intensively and figure out that there are certain regularities embedded in it. Based on these findings, we are able to reformulate the model into a new form and design a simple algorithm that can assign each locomotive a proper transmitting scheme during the whole scheduling procedure. Simulation results show that the outcomes obtained through this algorithm are improved by around 50% compared with those that do not apply the SIC technique. PMID:26569240

  3. A robust trust establishment scheme for wireless sensor networks.

    PubMed

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2015-03-23

    Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms.
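
    A generic, hedged sketch of the kind of lightweight trust estimate discussed above: an asymmetric update in which misbehaviour is punished faster than good behaviour is rewarded, which limits the payoff of on-off attacks. The update rule and constants are illustrative assumptions, not the paper's estimator.

```python
# Hedged sketch of a generic lightweight trust update (not the paper's exact scheme).
def update_trust(trust, success, gain=0.05, penalty=0.3):
    """Asymmetric update: misbehaviour is punished harder than good behaviour is rewarded."""
    if success:
        return trust + gain * (1.0 - trust)   # slow gain toward 1.0
    return trust * (1.0 - penalty)            # fast multiplicative loss

trust = 0.5
for outcome in [True] * 20 + [False] * 3 + [True] * 5:   # on-off style behaviour pattern
    trust = update_trust(trust, outcome)
print(round(trust, 3))                                   # trust recovers only slowly after the attack
```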

  4. CENTERA: A Centralized Trust-Based Efficient Routing Protocol with Authentication for Wireless Sensor Networks †

    PubMed Central

    Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim

    2015-01-01

    In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of “bad” nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics—maliciousness, cooperation, and compatibility—and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates “bad”, “misbehaving” or malicious nodes for a certain period, and put some nodes on probation. CENTERA increases the node's bad/probation level with repeated “bad” behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to “good” nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations. PMID:25648712

  5. CENTERA: a centralized trust-based efficient routing protocol with authentication for wireless sensor networks.

    PubMed

    Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim

    2015-02-02

    In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of "bad" nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics-maliciousness, cooperation, and compatibility-and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates "bad", "misbehaving" or malicious nodes for a certain period, and puts some nodes on probation. CENTERA increases the node's bad/probation level with repeated "bad" behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to "good" nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations.

  6. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990

  7. Comment on "Scrutinizing the carbon cycle and CO2residence time in the atmosphere" by H. Harde

    NASA Astrophysics Data System (ADS)

    Köhler, Peter; Hauck, Judith; Völker, Christoph; Wolf-Gladrow, Dieter A.; Butzin, Martin; Halpern, Joshua B.; Rice, Ken; Zeebe, Richard E.

    2018-05-01

    Harde (2017) proposes an alternative accounting scheme for the modern carbon cycle and concludes that only 4.3% of today's atmospheric CO2 is a result of anthropogenic emissions. As we will show, this alternative scheme is too simple, is based on invalid assumptions, and does not address many of the key processes involved in the global carbon cycle that are important on the timescale of interest. Harde (2017) therefore reaches an incorrect conclusion about the role of anthropogenic CO2 emissions. Harde (2017) tries to explain changes in atmospheric CO2 concentration with a single equation, while even the simplest model of the carbon cycle must contain equations for at least two reservoirs (the atmosphere and the surface ocean), which are solved simultaneously. A single equation is fundamentally at odds with basic theory and observations. In the following we will (i) clarify the difference between CO2 atmospheric residence time and adjustment time, (ii) present recently published information about anthropogenic carbon, (iii) present details about the processes that are missing in Harde (2017), (iv) briefly discuss shortcomings in Harde's generalization to paleo timescales, and (v) comment on deficiencies in some of the literature cited in Harde (2017).
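
    A hedged sketch of the point being made: even the simplest carbon-cycle model couples at least two reservoirs (atmosphere and surface ocean) whose equations are solved simultaneously. The exchange rates, initial pools and emission flux below are illustrative round numbers, not calibrated values.

```python
# Hedged sketch of a two-reservoir (atmosphere / surface ocean) carbon box model;
# all coefficients are illustrative, not calibrated to observations.
from scipy.integrate import solve_ivp

k_ao, k_oa = 1.0 / 10.0, 1.0 / 11.0       # assumed atmosphere->ocean and ocean->atmosphere rates (1/yr)

def emissions(t):
    return 10.0                            # illustrative constant anthropogenic flux (GtC/yr)

def two_box(t, y):
    atm, ocn = y                           # carbon pools (GtC)
    d_atm = emissions(t) - k_ao * atm + k_oa * ocn
    d_ocn = k_ao * atm - k_oa * ocn
    return [d_atm, d_ocn]                  # the two equations are integrated simultaneously

sol = solve_ivp(two_box, (0.0, 100.0), y0=[600.0, 700.0])
print(sol.y[:, -1])                        # atmospheric and surface-ocean carbon after 100 yr
```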

  8. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.

  9. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
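
    For context, a hedged sketch of the classical Jiang-Shu smoothness indicators for the three third-order sub-stencils on a uniform mesh; the paper's proposed simple indicator for the fifth-order reconstruction itself is not reproduced here.

```python
# Hedged sketch: classical Jiang-Shu smoothness indicators for the three third-order
# sub-stencils of WENO5 on a uniform mesh (context only, not the paper's new indicator).
import numpy as np

def beta_js(f):
    """f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}) cell averages on a uniform mesh."""
    fm2, fm1, f0, fp1, fp2 = f
    b0 = 13.0 / 12.0 * (fm2 - 2 * fm1 + f0) ** 2 + 0.25 * (fm2 - 4 * fm1 + 3 * f0) ** 2
    b1 = 13.0 / 12.0 * (fm1 - 2 * f0 + fp1) ** 2 + 0.25 * (fm1 - fp1) ** 2
    b2 = 13.0 / 12.0 * (f0 - 2 * fp1 + fp2) ** 2 + 0.25 * (3 * f0 - 4 * fp1 + fp2) ** 2
    return np.array([b0, b1, b2])

# across a jump, the sub-stencils crossing the discontinuity are flagged by large beta
print(beta_js(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))
```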

  10. The low-cost microwave plasma sources for science and industry applications

    NASA Astrophysics Data System (ADS)

    Tikhonov, V. N.; Aleshin, S. N.; Ivanov, I. A.; Tikhonov, A. V.

    2017-11-01

    Microwave plasma torches offered on the world market are built according to a scheme that can be called classical: power supply - magnetron head - microwave isolator with water load - reflected power meter - matching device - the plasma torch itself - sliding short circuit. The total cost of the devices in this list, with a 3 kW microwave generator in the implementation of, for example, SAIREM (France), is about 17,000 €. We have changed the classical scheme of the microwave plasma torch and optimised the design of the waveguide channel. As a result, we can supply simple and reliable sources of microwave plasma (complete with our low-budget microwave generator of up to 3 kW and a simple atmospheric-pressure plasma torch) at a price from 3,000 €.

  11. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. As the results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
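
    A hedged sketch of the kind of density/tracking-probability relationship mentioned above, assuming sensors deployed as a 2-D Poisson process with density lam and sensing radius r; this standard coverage approximation is a generic stand-in, not necessarily the approximate expression derived in the paper.

```python
# Hedged sketch: probability that a target is covered by at least k sensors under a
# 2-D Poisson deployment (generic coverage model, not the paper's derived expression).
import math

def tracking_probability(lam, r, k):
    """P(at least k sensors lie within sensing range r of the target)."""
    mean = lam * math.pi * r ** 2                  # expected number of sensors covering the target
    tail = sum(math.exp(-mean) * mean ** i / math.factorial(i) for i in range(k))
    return 1.0 - tail

print(tracking_probability(lam=0.01, r=15.0, k=3))  # density 0.01 sensors per unit area
```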

  12. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to the change of the indicator's fluorescence color under the ultra-violet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. Especially, several color spaces such as RGB, xyY, L∗a∗b∗, u‧v‧, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
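
    A hedged sketch of the regression step described above: colour features (here HSV) mapped to water content by a 2nd-order polynomial model with 3-fold cross-validation; the data are synthetic placeholders for the real indicator images.

```python
# Hedged sketch: quadratic polynomial regression from HSV colour features to water
# content, with 3-fold cross-validation; synthetic data stands in for the indicator images.
import colorsys
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
rgb = rng.uniform(size=(90, 3))                                   # mean RGB of each sample image
hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])            # colour-space features
water = rng.uniform(0.0, 30.0, size=90)                           # water content (%), synthetic

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mse = -cross_val_score(model, hsv, water, cv=3,
                       scoring="neg_mean_squared_error").mean()   # 3-fold CV as in the abstract
print(mse)
```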

  13. Taxonomic and phytogeographic implications from ITS phylogeny in Berberis (Berberidaceae).

    PubMed

    Kim, Young-Dong; Kim, Sung-Hee; Landrum, Leslie R

    2004-06-01

    A phylogeny based on the internal transcribed spacer (ITS) sequences from 79 taxa representing much of the diversity of Berberis L. (four major groups and 22 sections) was constructed for the first time. The phylogeny was basically congruent with the previous classification schemes at higher taxonomic levels, such as groups and subgroups. A notable exception is the non-monophyly of the group Occidentales of compound-leaved Berberis (previously separated as Mahonia). At lower levels, however, most of previous sections and subsections were not evident especially in simple-leaved Berberis. Possible relationship between section Horridae (group Occidentales) and the simple-leaved Berberis clade implies paraphyly of the compound-leaved Berberis. A well-known South America-Old World (mainly Asia) disjunctive distribution pattern of the simple-leaved Berberis is explained by a vicariance event occurring in the Cretaceous period. The ITS phylogeny also suggests that a possible connection between the Asian and South American groups through the North American species ( Berberis canadensis or B. fendleri) is highly unlikely.

  14. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption. This will help to reduce the weight of the UAVs computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen by moving an observer from location 1 to 2 toward the target. The distance estimation is given relative to the distance between locations 1 and 2. The simple scheme of an appositional compound eye to develop calculation formula is proposed. The distance estimation error analysis shows that it decreases with an increase of the total number of opaque channels to a certain limit. An acceptable error of about 2 % is achieved with the angle of view from 3 to 10° when the total number of opaque channels is 21600.

  15. Pseudorandom Noise Code-Based Technique for Cloud and Aerosol Discrimination Applications

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Prasad, Narasimha S.; Flood, Michael A.; Harrison, Fenton Wallace

    2011-01-01

    NASA Langley Research Center is working on a continuous wave (CW) laser based remote sensing scheme for the detection of CO2 and O2 from space-based platforms suitable for the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. ASCENDS is a future space-based mission to determine the global distribution of sources and sinks of atmospheric carbon dioxide (CO2). A unique, multi-frequency, intensity-modulated CW (IMCW) laser absorption spectrometer (LAS) operating at 1.57 micron for CO2 sensing has been developed. Effective aerosol and cloud discrimination techniques are being investigated in order to determine concentration values with accuracies better than 0.3%. In this paper, we discuss the demonstration of a PN code based technique for cloud and aerosol discrimination applications. The possibility of using maximum-length (ML) sequences for range and absorption measurements is investigated. A simple model for accomplishing this objective is formulated. Proof-of-concept experiments carried out using a SONAR-based LIDAR simulator built from simple audio hardware provided promising results for extension into optical wavelengths. Keywords: ASCENDS, CO2 sensing, O2 sensing, PN codes, CW lidar
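
    A hedged sketch of the ML-sequence idea mentioned above: a 127-chip maximum-length code generated by a linear feedback shift register, with range (delay) recovered from the peak of a circular correlation against a noisy return. The tap choice, noise level and delay are illustrative, not the instrument's actual processing chain.

```python
# Hedged sketch: m-sequence generation with a Fibonacci LFSR and delay recovery by
# circular correlation (illustrative parameters, not the instrument's processing chain).
import numpy as np

def m_sequence(taps, nbits):
    """0/1 maximum-length sequence from a Fibonacci LFSR with the given feedback taps."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq)

rng = np.random.default_rng(6)
code = 2.0 * m_sequence([7, 1], 7) - 1.0           # 127-chip code mapped to +/-1 chips
true_delay = 37
received = np.roll(code, true_delay) + rng.normal(scale=0.5, size=code.size)

corr = np.array([np.dot(np.roll(code, k), received) for k in range(code.size)])
print(int(corr.argmax()))                          # recovers true_delay (the range bin)
```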

  16. An equation of state for polyurea aerogel based on multi-shock response

    NASA Astrophysics Data System (ADS)

    Aslam, T. D.; Gustavsen, R. L.; Bartram, B. D.

    2014-05-01

    The equation of state (EOS) of polyurea aerogel (PUA) is examined through both single-shock Hugoniot data and more recent multi-shock compression experiments performed on the LANL 2-stage gas gun. A simple conservative Lagrangian numerical scheme, utilizing total variation diminishing (TVD) interpolation and an approximate Riemann solver, will be presented, as well as the calibration methodology. It will be demonstrated that a p-α model based on a Mie-Gruneisen fitting form for the solid material can reasonably replicate the multi-shock compression response at a variety of initial densities; such a methodology will be presented for a commercially available polyurea aerogel.

  17. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  18. The theory and implementation of a high quality pulse width modulated waveform synthesiser applicable to voltage FED inverters

    NASA Astrophysics Data System (ADS)

    Lower, Kim Nigel

    1985-03-01

    Modulation processes associated with the digital implementation of pulse width modulation (PWM) switching strategies were examined. A software package based on a portable turnkey structure is presented. Waveform synthesizer implementation techniques are reviewed. A three phase PWM waveform synthesizer for voltage fed inverters was realized. It is based on a constant carrier frequency of 18 kHz and a regular sample, single edge, asynchronous PWM switching scheme. With high carrier frequencies, it is possible to utilize simple switching strategies and as a consequence, many advantages are highlighted, emphasizing the importance to industrial and office markets.

  19. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, while the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying 2 mutually exclusive BNMs on two different color planes and applying an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
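
    A hedged sketch of mask-based halftoning with per-plane mask shifts, in the spirit of the shift scheme discussed above: each colour plane is thresholded against a mask, and shifting the mask between planes decorrelates the dot patterns. A random mask stands in for a real blue noise mask, and the shifts are illustrative.

```python
# Hedged sketch of mask-based halftoning with shifted masks per colour plane;
# a random mask stands in for a real blue noise mask.
import numpy as np

rng = np.random.default_rng(4)
mask = rng.random((64, 64))                 # stand-in for a blue noise mask in [0, 1)
image = rng.random((3, 64, 64))             # three colour planes (e.g. CMY) in [0, 1)

shifts = [(0, 0), (17, 5), (5, 17)]         # illustrative per-plane shifts of the same mask
halftone = np.stack([plane > np.roll(mask, s, axis=(0, 1))
                     for plane, s in zip(image, shifts)])
print(halftone.mean(axis=(1, 2)))           # dot coverage approximates the mean tone per plane
```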

  20. Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de

    2016-02-07

    We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while numerically more efficient at reasonable accuracy is the variant with neutral fragments.

  1. Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions

    NASA Astrophysics Data System (ADS)

    Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun; Vandenhoute, Marc

    2000-10-01

    In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (Gbs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (per each multicast session) and unicast traffic. To reduce the overhead due to Gbs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we have evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three multicast schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput and the processing loads.

  2. Stable time filtering of strongly unstable spatially extended systems

    PubMed Central

    Grote, Marcus J.; Majda, Andrew J.

    2006-01-01

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant–Friedrichs–Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection–diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system. PMID:16682626

  3. Stable time filtering of strongly unstable spatially extended systems.

    PubMed

    Grote, Marcus J; Majda, Andrew J

    2006-05-16

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant-Friedrichs-Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection-diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system.

  4. X-ray phase contrast imaging of biological specimens with femtosecond pulses of betatron radiation from a compact laser plasma wakefield accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kneip, S.; Center for Ultrafast Optical Science, University of Michigan, Ann Arbor 48109; McGuffey, C.

    2011-08-29

    We show that x-rays from a recently demonstrated table top source of bright, ultrafast, coherent synchrotron radiation [Kneip et al., Nat. Phys. 6, 980 (2010)] can be applied to phase contrast imaging of biological specimens. Our scheme is based on focusing a high power short pulse laser in a tenuous gas jet, setting up a plasma wakefield accelerator that accelerates and wiggles electrons analogously to a conventional synchrotron, but on the centimeter rather than tens of meter scale. We use the scheme to record absorption and phase contrast images of a tetra fish, damselfly and yellow jacket, in particular highlighting the contrast enhancement achievable with the simple propagation technique of phase contrast imaging. Coherence and ultrafast pulse duration will allow for the study of various aspects of biomechanics.

  5. The genetic code as a periodic table: algebraic aspects.

    PubMed

    Bashford, J D; Jarvis, P D

    2000-01-01

    The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.
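
    A hedged sketch of the labelling described above: each base maps to a coordinate pair, a codon becomes a point in the 6-dimensional codon weight space, and a property index can then be fitted as a low-order polynomial of the six coordinates. The property values below are random placeholders for a real physico-chemical index.

```python
# Hedged sketch: codons mapped to the 6-dimensional weight space via the base labelling
# A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), then a second-order polynomial fit by least squares.
import itertools
import numpy as np

base = {"A": (-1, 0), "C": (0, -1), "G": (0, 1), "U": (1, 0)}
codons = ["".join(c) for c in itertools.product("ACGU", repeat=3)]
coords = np.array([sum((base[b] for b in c), ()) for c in codons])   # 64 x 6 coordinates

rng = np.random.default_rng(3)
prop = rng.normal(size=64)              # placeholder for a physico-chemical codon index

# design matrix of monomials up to second order in the six coordinates
cols = [np.ones(64)] + [coords[:, i] for i in range(6)] \
     + [coords[:, i] * coords[:, j] for i in range(6) for j in range(i, 6)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, prop, rcond=None)
print(coef[:3])                         # leading fitted coefficients
```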

  6. Photonic crystal nanocavity assisted rejection ratio tunable notch microwave photonic filter

    PubMed Central

    Long, Yun; Xia, Jinsong; Zhang, Yong; Dong, Jianji; Wang, Jian

    2017-01-01

    Driven by the increasing demand for handling microwave signals with compact devices, low power consumption, high efficiency and high reliability, it is highly desired to generate, distribute, and process microwave signals using photonic integrated circuits. Silicon photonics offers a promising platform facilitating ultracompact microwave photonic signal processing assisted by silicon nanophotonic devices. In this paper, we propose, theoretically analyze and experimentally demonstrate a simple scheme to realize an ultracompact rejection ratio tunable notch microwave photonic filter (MPF) based on a silicon photonic crystal (PhC) nanocavity with fixed extinction ratio. Using a conventional modulation scheme with only a single phase modulator (PM), the rejection ratio of the presented MPF can be tuned from about 10 dB to beyond 60 dB. Moreover, the central frequency tunable operation in the high rejection ratio region is also demonstrated in the experiment. PMID:28067332

  7. Decoding mobile-phone image sensor rolling shutter effect for visible light communications

    NASA Astrophysics Data System (ADS)

    Liu, Yang

    2016-01-01

    Optical wireless communication (OWC) using visible lights, also known as visible light communication (VLC), has attracted significant attention recently. As the traditional OWC and VLC receivers (Rxs) are based on PIN photo-diode or avalanche photo-diode, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive since nowadays nearly every person has a smart phone with embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern. Then we describe their processing algorithms one by one. The experimental results show that both the proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.

  8. An integral equation formulation for rigid bodies in Stokes flow in three dimensions

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan

    2017-03-01

    We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O (n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.

  9. PI controller design for indirect vector controlled induction motor: A decoupling approach.

    PubMed

    Jain, Jitendra Kr; Ghosh, Sandip; Maity, Somnath; Dworak, Pawel

    2017-09-01

    Decoupling of the stator currents is important for smoother torque response of indirect vector controlled induction motors. Typically, feedforward decoupling is used to take care of current coupling that requires exact knowledge of motor parameters, additional circuitry and signal processing. In this paper, a method is proposed to design the regulating proportional-integral gains that minimize coupling without any requirement of the additional decoupler. The variation of the coupling terms for change in load torque is considered as the performance measure. An iterative linear matrix inequality based H ∞ control design approach is used to obtain the controller gains. A comparison between the feedforward and the proposed decoupling schemes is presented through simulation and experimental results. The results show that the proposed scheme is simple yet effective even without additional block or burden on signal processing. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Photonic crystal nanocavity assisted rejection ratio tunable notch microwave photonic filter

    NASA Astrophysics Data System (ADS)

    Long, Yun; Xia, Jinsong; Zhang, Yong; Dong, Jianji; Wang, Jian

    2017-01-01

    Driven by the increasing demand for handling microwave signals with compact devices, low power consumption, high efficiency and high reliability, it is highly desired to generate, distribute, and process microwave signals using photonic integrated circuits. Silicon photonics offers a promising platform facilitating ultracompact microwave photonic signal processing assisted by silicon nanophotonic devices. In this paper, we propose, theoretically analyze and experimentally demonstrate a simple scheme to realize an ultracompact rejection ratio tunable notch microwave photonic filter (MPF) based on a silicon photonic crystal (PhC) nanocavity with fixed extinction ratio. Using a conventional modulation scheme with only a single phase modulator (PM), the rejection ratio of the presented MPF can be tuned from about 10 dB to beyond 60 dB. Moreover, the central frequency tunable operation in the high rejection ratio region is also demonstrated in the experiment.

  11. DebtRank-transparency: Controlling systemic risk in financial networks

    PubMed Central

    Thurner, Stefan; Poledna, Sebastian

    2013-01-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (as DebtRank) of the interbank liability network. With a simple agent based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e. making the DebtRank of individual banks visible to others, and by imposing a rule, that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed. PMID:23712454

  12. A simple extension of Roe's scheme for real gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arabi, Sina, E-mail: sina.arabi@polymtl.ca; Trépanier, Jean-Yves; Camarero, Ricardo

    The purpose of this paper is to develop a highly accurate numerical algorithm to model real gas flows in local thermodynamic equilibrium (LTE). The Euler equations are solved using a finite volume method based on Roe's flux difference splitting scheme including real gas effects. A novel algorithm is proposed to calculate the Jacobian matrix which satisfies the flux difference splitting exactly in the average state for a general equation of state. This algorithm increases the robustness and accuracy of the method, especially around the contact discontinuities and shock waves where the gas properties jump appreciably. The results are compared with an exact solution of the Riemann problem for the shock tube which considers the real gas effects. In addition, the method is applied to a blunt cone to illustrate the capability of the proposed extension in solving two-dimensional flows.
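
    For context, a hedged sketch of the ideal-gas Roe flux-difference splitting that such schemes generalise to real gases, using the standard Roe averages and wave strengths with gamma = 1.4; the paper's real-gas Jacobian and average-state construction are not reproduced here.

```python
# Hedged sketch: ideal-gas Roe flux for the 1-D Euler equations (context only;
# the real-gas extension described in the paper is not reproduced here).
import numpy as np

def roe_flux(UL, UR, gamma=1.4):
    """Roe interface flux for conserved states U = (rho, rho*u, E)."""
    def primitives(U):
        rho, m, E = U
        u = m / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
        H = (E + p) / rho
        return rho, u, p, H

    rhoL, uL, pL, HL = primitives(UL)
    rhoR, uR, pR, HR = primitives(UR)
    FL = np.array([rhoL * uL, rhoL * uL ** 2 + pL, uL * (UL[2] + pL)])
    FR = np.array([rhoR * uR, rhoR * uR ** 2 + pR, uR * (UR[2] + pR)])

    # Roe-averaged state
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)
    H = (wL * HL + wR * HR) / (wL + wR)
    a = np.sqrt((gamma - 1.0) * (H - 0.5 * u * u))

    d1, d2, d3 = UR - UL                                   # jumps in the conserved variables
    alpha2 = (gamma - 1.0) / a ** 2 * (d1 * (H - u * u) + u * d2 - d3)
    alpha1 = (d1 * (u + a) - d2 - a * alpha2) / (2.0 * a)
    alpha3 = d1 - alpha1 - alpha2

    lambdas = np.array([u - a, u, u + a])                  # wave speeds
    K = np.array([[1.0, u - a, H - u * a],                 # right eigenvectors (rows)
                  [1.0, u,     0.5 * u * u],
                  [1.0, u + a, H + u * a]])
    alphas = np.array([alpha1, alpha2, alpha3])
    return 0.5 * (FL + FR) - 0.5 * (K.T * np.abs(lambdas) * alphas).sum(axis=1)

# Sod-like interface: left and right conserved states
print(roe_flux(np.array([1.0, 0.0, 2.5]), np.array([0.125, 0.0, 0.25])))
```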

  13. Photonic crystal nanocavity assisted rejection ratio tunable notch microwave photonic filter.

    PubMed

    Long, Yun; Xia, Jinsong; Zhang, Yong; Dong, Jianji; Wang, Jian

    2017-01-09

    Driven by the increasing demand for handling microwave signals with compact devices, low power consumption, high efficiency and high reliability, it is highly desired to generate, distribute, and process microwave signals using photonic integrated circuits. Silicon photonics offers a promising platform facilitating ultracompact microwave photonic signal processing assisted by silicon nanophotonic devices. In this paper, we propose, theoretically analyze and experimentally demonstrate a simple scheme to realize an ultracompact rejection ratio tunable notch microwave photonic filter (MPF) based on a silicon photonic crystal (PhC) nanocavity with fixed extinction ratio. Using a conventional modulation scheme with only a single phase modulator (PM), the rejection ratio of the presented MPF can be tuned from about 10 dB to beyond 60 dB. Moreover, the central frequency tunable operation in the high rejection ratio region is also demonstrated in the experiment.

  14. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations - A New Approach based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.

    2005-01-01

    This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  15. A low-cost, tunable laser lock without laser frequency modulation

    NASA Astrophysics Data System (ADS)

    Shea, Margaret E.; Baker, Paul M.; Gauthier, Daniel J.

    2015-05-01

    Many experiments in optical physics require laser frequency stabilization. This can be achieved by locking to an atomic reference using saturated absorption spectroscopy. Often, the laser frequency is modulated and phase-sensitive detection is used. This method, while well-proven and robust, relies on expensive components, can introduce an undesirable frequency modulation into the laser, and is not easily frequency tuned. Here, we report a simple locking scheme similar to those implemented previously. We modulate the atomic resonances in a saturated absorption setup with an AC magnetic field created by a single solenoid. The same coil applies a DC field that allows tuning of the lock point. We use an auto-balanced detector to make our scheme more robust against laser power fluctuations and stray magnetic fields. The coil, its driver, and the detector are home-built with simple, cheap components. Our technique is low-cost, simple to set up, tunable, introduces no laser frequency modulation, and only requires one laser. We gratefully acknowledge the financial support of the NSF through Grant # PHY-1206040.

  16. Variational discretization of the nonequilibrium thermodynamics of simple systems

    NASA Astrophysics Data System (ADS)

    Gay-Balmaz, François; Yoshimura, Hiroaki

    2018-04-01

    In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by a discretization of the Lagrangian variational formulation of nonequilibrium thermodynamics developed in (Gay-Balmaz and Yoshimura 2017a J. Geom. Phys. 111 169–93, part I; Gay-Balmaz and Yoshimura 2017b J. Geom. Phys. 111 194–212, part II) and thus extend the variational integrators of Lagrangian mechanics to include irreversible processes. In the continuous setting, we derive the structure-preserving property of the flow of such systems. This property is an extension of the symplectic property of the flow of the Euler–Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. We finally illustrate our discrete variational schemes with the implementation of an example of a simple and closed system.

  17. Model-independent particle accelerator tuning

    DOE PAGES

    Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry

    2013-10-21

    We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) It has the ability to handle unknown, time-varying systems, 2) It gives known bounds on parameter update rates, 3) We give an analytic proof of its convergence and its stability, and 4) It has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent, it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme’s ability to simultaneously adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator’s transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
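
    The following sketch illustrates the general idea of such model-independent, dithering-based tuning: each set point is perturbed at its own frequency and nudged using only a scalar measured cost. It is a generic extremum-seeking-style loop written for illustration, not the authors' exact rotation rate tuning law; the cost function, gains, and frequencies are assumptions.

```python
# Illustrative sketch of a dithering-based, model-independent tuner: each
# parameter oscillates at its own frequency and is nudged using only a scalar,
# measured cost -- no model of the system is used. This is a generic
# extremum-seeking-style loop, not the authors' exact update law; the gains
# and the stand-in cost function below are assumptions.
import numpy as np

def measured_cost(p, t):
    """Stand-in for a beam-based measurement (e.g. beam loss). The optimum
    drifts slowly in time to mimic a time-varying system."""
    target = np.array([1.0 + 0.2 * np.sin(0.01 * t), -0.5, 0.3])
    return np.sum((p - target) ** 2)

def tune(n_steps=20000, dt=1e-3, amp=0.05, gain=4.0):
    p = np.zeros(3)                                       # initial set points
    omega = 2 * np.pi * np.array([10.0, 13.0, 17.0])      # distinct dither frequencies
    for k in range(n_steps):
        t = k * dt
        cost = measured_cost(p, t)
        # Bounded update: each component moves at a fixed rate whose phase is
        # modulated by the measured cost, so update rates stay bounded by amp*omega.
        p += dt * amp * omega * np.cos(omega * t + gain * cost)
    return p, measured_cost(p, n_steps * dt)

p_final, final_cost = tune()
print("tuned set points:", p_final, "residual cost:", final_cost)
```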

  18. Free web-based modelling platform for managed aquifer recharge (MAR) applications

    NASA Astrophysics Data System (ADS)

    Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia

    2017-04-01

    Managed aquifer recharge represents a valuable instrument for sustainable water resources management. The concept implies purposeful infiltration of surface water into the subsurface for later recovery or environmental benefits. Over decades, MAR schemes were successfully installed worldwide for a variety of reasons: maximizing the natural storage capacity of aquifers, physical aquifer management, water quality management, and ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for planning, management and optimization of main components of MAR schemes. The tools are grouped into 13 specific applications that cover most relevant challenges encountered at MAR sites, both from quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, clogging development and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). Up to now, the simulation core of the INOWAS-DSS, which is based on the finite-difference groundwater flow model MODFLOW, is implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land use and climate change and compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online. Besides the simulation tools, a web-based database is under development where geospatial and time series data can be stored, managed, and processed. Furthermore, a web-based information system containing user guides for the various developed tools and applications as well as basic information on MAR and related topics is published and will be regularly expanded as new tools are implemented. The INOWAS-DSS, including its simulation tools, database and information system, provides an extensive framework to manage, plan and optimize MAR facilities. As the INOWAS-DSS is open-source software accessible via the internet using standard web browsers, it offers new ways for data sharing and collaboration among various partners and decision makers.

  19. The Semantic Management of Environmental Resources within the Interoperable Context of the EuroGEOSS: Alignment of GEMET and the GEOSS SBAs

    NASA Astrophysics Data System (ADS)

    Cialone, Claudia; Stock, Kristin

    2010-05-01

    EuroGEOSS is a European Commission-funded project. It aims at improving a scientific understanding of the complex mechanisms which drive changes affecting our planet, identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources and rendered available following the specifications of already existent frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels between these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area. Secondly, EuroGEOSS is intended to accommodate a number of existing environmental projects (for example, GEOSS and INSPIRE). This requirement imposes constraints on the selection. Thirdly, the selected classification scheme or group of schemes (if more than one) must be capable of alignment (establishing different kinds of mappings between concepts, hence preserving intact the original knowledge schemes) or merging (the creation of another unique ontology from the original ontological sources) (Pérez-Gómez et al., 2004). Last but not least, there is the issue of including multi-lingual schemes that are based on free, open standards (non-proprietary). Using these selection criteria, we aim to support open and convenient data discovery and exchange for users who speak different languages (particularly the European ones for the broad scopes of EuroGEOSS). In order to support the project, we have developed a solution that employs two classification schemes: the Societal Benefit Areas (SBAs), the upper-level environmental categorization developed for the GEOSS project, and the GEneral Multilingual Environmental Thesaurus (GEMET), a general environmental thesaurus whose conceptual structure has already been integrated with the spatial data themes proposed by the INSPIRE project. The former seems to provide the spatial data keywords relevant to the INSPIRE Directive (JRC, 2008). In this way, we provide users with a basic set of concepts to support resource description and discovery in the thematic areas while supporting the requirements of INSPIRE and GEOSS.
Furthermore, the use of only two classification schemes together with the fact that the SBAs are very general categories while GEMET includes much more detailed, yet still top-level, concepts, makes alignment an achievable task. Alignment was selected over merging because it leaves the existing classification schemes intact and requires only a simple activity of defining mappings from GEMET to the SBAs. In order to accomplish this task, we are developing a simple, automated, open-source application to assist thematic experts in defining the mappings between concepts in the two classification schemes. The application will then generate SKOS mappings (exactMatch, closeMatch, broadMatch, narrowMatch, relatedMatch) based on thematic expert selections between the concepts in GEMET and the SBAs (including both the general Societal Benefit Areas and their subcategories). Once these mappings are defined and the SKOS files generated, resource providers will be able to select concepts from either GEMET or the SBAs (or a mixture) to describe their resources, and discovery approaches will support selection of concepts from either classification scheme, also returning results classified using the other scheme. While the focus of our work has been on the SBAs and GEMET, we also plan to provide a method for resource providers to further extend the semantic infrastructure by defining alignments to new classification schemes if these are required to support particular specialized thematic areas that are not covered by GEMET. In this way, the approach is flexible and suited to the general scope of EuroGEOSS, allowing specialists to increase at will the level of semantic quality and specificity of data to the initial infrastructural skeleton of the project.
References
Joint Research Centre (JRC), 2008. INSPIRE Metadata Editor User Guide.
Pérez-Gómez, A., Fernandez-Lopez, M., Corcho, O., 2004. Ontological Engineering: With Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Springer: London.
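
    As a rough illustration of the mapping-generation step described above, the sketch below emits SKOS mapping triples from expert-selected GEMET-to-SBA pairs using the rdflib library. The SBA namespace, concept identifiers, and example pairs are placeholders, not the actual EuroGEOSS application or its data.

```python
# Minimal sketch of emitting SKOS mapping triples between GEMET concepts and
# GEOSS SBAs from expert-selected pairs. The URIs and the example pairs below
# are placeholders, not the EuroGEOSS application's real data.
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

GEMET = Namespace("http://www.eionet.europa.eu/gemet/concept/")
SBA = Namespace("http://example.org/geoss/sba/")          # hypothetical SBA namespace

# (GEMET concept id, SBA id, SKOS mapping property) as chosen by a thematic expert
expert_selections = [
    ("4612", "biodiversity", SKOS.closeMatch),
    ("2213", "disasters", SKOS.broadMatch),
]

g = Graph()
g.bind("skos", SKOS)
for gemet_id, sba_id, mapping in expert_selections:
    g.add((GEMET[gemet_id], mapping, SBA[sba_id]))

print(g.serialize(format="turtle"))
```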

  20. Proactive Time-Rearrangement Scheme for Multi-Radio Collocated Platform

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Shin, Sang-Heon; Park, Sang Kyu

    We present a simple proactive time rearrangement scheme (PATRA) that reduces the interferences from multi-radio devices equipped in one platform and guarantees user-conceived QoS. Simulation results show that the interference among multiple radios in one platform causes severe performance degradation and cannot guarantee the user requested QoS. However, the PATRA can dramatically improve not only the user-conceived QoS but also the overall network throughput.

  1. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005), which was analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we also analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and we consider the security of the scheme through simple examples.

  2. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  3. Modulation limit of semiconductor lasers by some parametric modulation schemes

    NASA Astrophysics Data System (ADS)

    Iga, K.

    1985-07-01

    Using the simple rate equations and small signal analysis, the modulation speed limit of semiconductor lasers with modulation schemes such as gain switching, modulation of nonradiative recombination lifetime of minority carriers, and cavity Q modulation, is calculated and compared with the injection modulation scheme of Ikegami and Suematsu (1968). It is found that the maximum modulation frequency for the gain and Q modulation can exceed the resonance-like frequency by a factor equal to the coefficient of the time derivative of the modulation parameter, though the nonradiative lifetime modulation is not shown to be different from the injection modulation. A solution for the carrier lifetime modulation of LED is obtained, and the possibility of wideband modulation in this scheme is demonstrated.
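
    For reference, one common form of the single-mode rate equations used in such small-signal analyses is sketched below; the symbols and the resonance-like (relaxation oscillation) frequency expression follow standard textbook treatments and are not taken from the paper itself.

```latex
% One common form of the single-mode laser rate equations (carrier density N,
% photon density S), with a linear gain model g(N) = a (N - N_tr):
\frac{dN}{dt} = \frac{I}{qV} - \frac{N}{\tau_n} - v_g\, a\,(N - N_{tr})\, S ,
\qquad
\frac{dS}{dt} = \Gamma v_g\, a\,(N - N_{tr})\, S - \frac{S}{\tau_p} + \Gamma \beta \frac{N}{\tau_n} .
% Small-signal analysis of these equations gives the resonance-like
% (relaxation oscillation) frequency
f_r \approx \frac{1}{2\pi}\sqrt{\frac{v_g\, a\, S_0}{\tau_p}} ,
% which sets the scale that gain and Q modulation can exceed according to the abstract.
```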

  4. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method, respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then, different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes and indicates that these schemes can effectively widen the wavenumber range of high accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
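
    The sketch below illustrates the least-squares idea on a standard 1D staggered-grid first-derivative stencil (not the rotated operator itself): the coefficients are fitted so that the stencil's wavenumber response matches the exact response k over a chosen band. The band limit k_max and the stencil length are assumptions made for the example.

```python
# Sketch of the least-squares idea behind an LS-optimized finite-difference
# operator, applied to a standard 1D staggered-grid first-derivative stencil
# (not the rotated operator itself). Coefficients are chosen so the stencil's
# wavenumber response matches the exact value k over 0 < k < k_max.
import numpy as np

def ls_staggered_coeffs(n_pairs=4, h=1.0, kmax_frac=0.75, n_samples=400):
    """First-derivative coefficients c_n for
       du/dx ~ (1/h) * sum_n c_n * [u(x+(n-1/2)h) - u(x-(n-1/2)h)]."""
    k = np.linspace(1e-6, kmax_frac * np.pi / h, n_samples)
    # Wavenumber response of each stencil pair: (2/h) * sin((n - 1/2) k h)
    A = np.stack([(2.0 / h) * np.sin((n - 0.5) * k * h)
                  for n in range(1, n_pairs + 1)], axis=1)
    c, *_ = np.linalg.lstsq(A, k, rcond=None)      # fit the response to the exact k
    return c

c_ls = ls_staggered_coeffs()
print("LS coefficients:    ", c_ls)

# Commonly quoted eighth-order Taylor-series values for comparison: the LS set
# trades a little accuracy at small k for a wider usable wavenumber band.
c_taylor = np.array([1225/1024, -245/3072, 49/5120, -5/7168])
print("Taylor coefficients:", c_taylor)
```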

  5. Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches

    NASA Astrophysics Data System (ADS)

    Clunie, David A.

    2000-05-01

    Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors, were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance, regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

  6. Quadrature demultiplexing using a degenerate vector parametric amplifier.

    PubMed

    Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A

    2014-12-01

    We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at bit-error rate (BER) equal to 10(-9). The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.

  7. Electro-optic analyzer of angular momentum hyperentanglement

    PubMed Central

    Wu, Ziwen; Chen, Lixiang

    2016-01-01

    Characterizing high-dimensional entanglement is fundamental in quantum information applications. Here, we propose a theoretical scheme to analyze and characterize the angular momentum hyperentanglement in which two photons are entangled simultaneously in spin and orbital angular momentum. Based on electro-optic sampling with a proposed hyperentanglement analyzer and a simple matrix operation using Cramer's rule, our simulations show that it is possible to effectively retrieve both the information about the degree of polarization entanglement and the spiral spectrum of high-dimensional orbital angular momentum entanglement. PMID:26911530

  8. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. The algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
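
    A minimal sketch of such a topology-independent, face-based storage layout is given below: cells are addressed only through each face's owner/neighbour indices, so cells may have an arbitrary number of faces. This illustrates the idea only and is not the actual data model of the "FlashFlow" package.

```python
# Minimal sketch of a face-based, topology-independent storage layout for a
# Godunov-type finite-volume solver: cells are referenced only through the
# faces' owner/neighbour indices, so cells may have any number of faces.
# Illustration only; not the actual "FlashFlow" data structures.
import numpy as np

class FaceMesh:
    def __init__(self, owner, neighbour, area, normal, volume):
        self.owner = np.asarray(owner)          # cell index on one side of each face
        self.neighbour = np.asarray(neighbour)  # cell on the other side (-1 = boundary)
        self.area = np.asarray(area)            # face areas
        self.normal = np.asarray(normal)        # unit normals, pointing owner -> neighbour
        self.volume = np.asarray(volume)        # cell volumes

def residual(mesh, state, numerical_flux, boundary_flux):
    """Accumulate the flux balance (sum of F*n*A / V) for every cell."""
    res = np.zeros_like(state)
    for f in range(len(mesh.owner)):
        o, n = mesh.owner[f], mesh.neighbour[f]
        if n >= 0:
            flux = numerical_flux(state[o], state[n], mesh.normal[f])
            res[o] -= flux * mesh.area[f] / mesh.volume[o]
            res[n] += flux * mesh.area[f] / mesh.volume[n]
        else:                                   # boundary face: one-sided flux
            flux = boundary_flux(state[o], mesh.normal[f])
            res[o] -= flux * mesh.area[f] / mesh.volume[o]
    return res
```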

  9. Analysis and control of the METC fluid bed gasifier. Quarterly progress report, January--March 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-01

    This document summarizes work performed for the period 10/1/94 to 3/31/95. In this work, three components will form the basis for the design of a control scheme for the Fluidized Bed Gasifier (FBG) at METC: (1) a control systems analysis based on simple linear models derived from process data, (2) review of the literature on fluid bed gasifier operation and control, and (3) understanding of present FBG operation and real-world considerations. Below we summarize work accomplished to date in each of these areas.

  10. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio frequency carrier, and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.

  11. Near-complete teleportation of a superposed coherent state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, Yong Wook; Kim, Hyunjae; Lee, Hai-Woong

    2004-09-01

    The four Bell-type entangled coherent states, |α⟩|-α⟩ ± |-α⟩|α⟩ and |α⟩|α⟩ ± |-α⟩|-α⟩, can be discriminated with a high probability using only linear optical means, as long as α is not too small. Based on this observation, we propose a simple scheme to almost completely teleport a superposed coherent state. The nonunitary transformation that is required to complete the teleportation can be achieved by embedding the receiver's field state in a larger Hilbert space consisting of the field and a single atom and performing a unitary transformation on this Hilbert space.

  12. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    NASA Astrophysics Data System (ADS)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work, a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative are utilized to convert the FBVDE into a simple computational problem by reducing it into a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.

  13. Electrical motor/generator drive apparatus and method

    DOEpatents

    Su, Gui Jia

    2013-02-12

    The present disclosure includes electrical motor/generator drive systems and methods that significantly reduce inverter direct-current (DC) bus ripple currents and thus the volume and cost of a capacitor. The drive methodology is based on a segmented drive system that does not add switches or passive components but involves reconfiguring inverter switches and motor stator winding connections in a way that allows the formation of multiple, independent drive units and the use of simple alternated switching and optimized Pulse Width Modulation (PWM) schemes to eliminate or significantly reduce the capacitor ripple current.

  14. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.

  15. Radar-derived Quantitative Precipitation Estimation in Complex Terrain over the Eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Gou, Y.

    2017-12-01

    Quantitative Precipitation Estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex space-time variability in precipitation microphysics. This paper develops two radar QPE schemes based on the Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, respectively, using observations from 11 Doppler weather radars and 3294 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profiles of reflectivity clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method in all precipitation events in terms of score comparison using validation gauge measurements as references, with higher correlation (above 75.74%), lower mean absolute error (below 82.38%) and lower root-mean-square error (below 89.04%) over all the comparative frames. It is also found that the SCIT-based approach can effectively mitigate the radar QPE local error and represent precipitation spatiotemporal variability better than the RT-based scheme.
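
    Both schemes ultimately convert reflectivity to rain rate through a Z-R power law; the sketch below shows this basic conversion with commonly used convective default coefficients as placeholders (the paper instead tailors the relation per storm cell).

```python
# Basic reflectivity-to-rain-rate conversion (Z = a * R**b) that radar QPE
# schemes build on. The a, b values below are common convective defaults used
# as placeholders; the paper tunes the relation per storm cell rather than globally.
import numpy as np

def rain_rate_from_dbz(dbz, a=300.0, b=1.4, dbz_threshold=15.0):
    """Convert radar reflectivity (dBZ) to rain rate R (mm/h) via Z = a*R^b."""
    dbz = np.asarray(dbz, dtype=float)
    z_linear = 10.0 ** (dbz / 10.0)                   # dBZ -> Z in mm^6 m^-3
    rate = (z_linear / a) ** (1.0 / b)
    return np.where(dbz >= dbz_threshold, rate, 0.0)  # RT-style reflectivity threshold

print(rain_rate_from_dbz([20.0, 35.0, 50.0]))
```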

  16. Development of smart piezoelectric transducer self-sensing, self-diagnosis and tuning schemes for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Lee, Sang Jun

    Autonomous structural health monitoring (SHM) systems using active sensing devices have been studied extensively to diagnose the current state of aerospace, civil infrastructure and mechanical systems in near real-time and aim to eventually reduce life-cycle costs by replacing current schedule-based maintenance with condition-based maintenance. This research develops four schemes for SHM applications: (1) a simple and reliable PZT transducer self-sensing scheme; (2) a smart PZT self-diagnosis scheme; (3) an instantaneous reciprocity-based PZT diagnosis scheme; and (4) an effective PZT transducer tuning scheme. First, this research develops a PZT transducer self-sensing scheme, which is a necessary condition to accomplish a PZT transducer self-diagnosis. The main advantages of the proposed self-sensing approach are its simplicity and adaptability. The necessary hardware is only an additional self-sensing circuit which includes a minimum of electric components. With this circuit, the self-sensing parameters can be calibrated instantaneously in the presence of changing operational and environmental conditions of the system. In particular, this self-sensing scheme focuses on estimating the mechanical response in the time domain for the subsequent applications of the PZT transducer self-diagnosis and tuning with guided wave propagation. The most significant challenge of this self-sensing comes from the fact that the magnitude of the mechanical response is generally several orders of magnitude smaller than that of the input signal. The proposed self-sensing scheme fully takes advantage of the fact that any user-defined input signals can be applied to a host structure and the input waveform is known. The performance of the proposed self-sensing scheme is demonstrated by theoretical analysis, numerical simulations and various experiments. Second, this research proposes a smart PZT transducer self-diagnosis scheme based on the developed self-sensing scheme. Conventionally, the capacitance change of the PZT wafer is monitored to identify the abnormal PZT condition because the capacitance of the PZT wafer is linearly proportional to its size and also related to the bonding condition. However, temperature variation is another primary factor that affects the PZT capacitance. To ensure the reliable transducer self-diagnosis, two different self-diagnosis features are proposed to differentiate two main PZT wafer defects, i.e., PZT debonding and PZT cracking, from temperature variations and structural damages. The PZT debonding is identified using two indices based on time reversal process (TRP) without any baseline data. Also, the PZT cracking is identified by monitoring the change of the generated Lamb wave power ratio index with respect to the driving frequency. The uniqueness of this self-diagnosis scheme is that the self-diagnosis features can differentiate the PZT defects from environmental variations and structural damages. Therefore, it is expected to minimize false alarms that are induced by operational or environmental variations as well as structural damages. The applicability of the proposed self-diagnosis scheme is verified by theoretical analysis, numerical simulations, and experimental tests. Third, a new methodology of guided wave-based PZT transducer diagnosis is developed to identify PZT transducer defects without using prior baseline data. This methodology can be applied when a number of same-size PZT transducers are attached to a target structure to form a sensor network.
The advantage of the proposed technique is that abnormal PZT transducers among intact PZT transducers can be detected even when the system being monitored is subjected to varying operational and environmental conditions or changing structural conditions. To achieve this goal, the proposed diagnosis technique utilizes the linear reciprocity of guided wave propagation between a pair of surface-bonded PZT transducers. Finally, a PZT transducer tuning scheme is being developed for selective Lamb wave excitation and sensing. This is useful for structural damage detection based on Lamb wave propagation because the proper transducer size and the corresponding input frequency are crucial for selective Lamb wave excitation and sensing. The circular PZT response model is derived, and the energy balance is included for a better prediction of the PZT responses because the existing PZT response models do not consider any energy balance between Lamb wave modes. In addition, two calibration methods are also suggested in order to model the PZT responses more accurately by considering a bonding layer effect. (Abstract shortened by UMI.)

  17. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.
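
    As a sketch of the geometric-parameter estimation step, the snippet below fits a sphere centre and radius to a handful of 3D points (such as reconstructed laser spots) with a linear algebraic least-squares fit; the paper's optimized refinement scheme is not reproduced, and the synthetic data are for illustration only.

```python
# Sketch of estimating a spherical target's centre and radius from a few 3D
# points (e.g. reconstructed laser spots) with a linear algebraic fit. The
# paper refines such estimates with an optimization step; this is only a
# simple closed-form seed, with made-up synthetic data.
import numpy as np

def fit_sphere(points):
    """|p - c|^2 = r^2  =>  2*p.c + (r^2 - |c|^2) = |p|^2, linear in the unknowns."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Noisy points on a sphere of radius 0.5 centred at (1, 2, 3)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 0.5 * dirs + 0.001 * rng.normal(size=(20, 3))
print(fit_sphere(pts))
```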

  18. Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Kao, Chiu Yen; Osher, Stanley; Qian, Jianliang

    2004-05-01

    We propose a simple, fast sweeping method based on the Lax-Friedrichs monotone numerical Hamiltonian to approximate viscosity solutions of arbitrary static Hamilton-Jacobi equations in any number of spatial dimensions. By using the Lax-Friedrichs numerical Hamiltonian, we can easily obtain the solution at a specific grid point in terms of its neighbors, so that a Gauss-Seidel-type nonlinear iterative method can be utilized. Furthermore, by incorporating a group-wise causality principle into the Gauss-Seidel iteration by following a finite group of characteristics, we have an easy-to-implement, sweeping-type, and rapidly convergent numerical method. However, unlike other methods based on the Godunov numerical Hamiltonian, some computational boundary conditions are needed in the implementation. We give a simple recipe which enforces a version of the discrete min-max principle. Some convergence analysis is done for the one-dimensional eikonal equation. Extensive 2-D and 3-D numerical examples illustrate the efficiency and accuracy of the new approach. To our knowledge, this is the first fast numerical method based on discretizing the Hamilton-Jacobi equation directly without assuming convexity and/or homogeneity of the Hamiltonian.
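
    A minimal 1D version of the Lax-Friedrichs sweeping update for the eikonal equation |u'(x)| = f(x) is sketched below; the boundary treatment is simplified to linear extrapolation, and the min() safeguard is a common practical choice rather than the paper's exact recipe.

```python
# Minimal 1D sketch of Lax-Friedrichs sweeping for the eikonal equation
# |u'(x)| = f(x) with a point source, using alternating Gauss-Seidel sweeps.
# Boundary handling is simplified (linear extrapolation at the ends), and the
# min() with the previous value is a common practical safeguard rather than
# part of the paper's exact recipe.
import numpy as np

def lf_sweep_eikonal_1d(n=201, L=1.0, f=1.0, sweeps=50):
    dx = L / (n - 1)
    sigma = 1.0                       # bound on |dH/dp| for H(p) = |p|
    u = np.full(n, 1e6)
    src = n // 2
    u[src] = 0.0                      # point source at the domain centre

    for s in range(sweeps):
        order = range(1, n - 1) if s % 2 == 0 else range(n - 2, 0, -1)
        for i in order:
            if i == src:
                continue
            p = (u[i + 1] - u[i - 1]) / (2.0 * dx)
            cand = (dx / sigma) * (f - abs(p) + sigma * (u[i + 1] + u[i - 1]) / (2.0 * dx))
            u[i] = min(u[i], cand)
        u[0] = 2.0 * u[1] - u[2]      # simple linear extrapolation at the ends
        u[-1] = 2.0 * u[-2] - u[-3]

    x = np.linspace(0.0, L, n)
    return np.max(np.abs(u - np.abs(x - x[src])))   # error vs exact distance function

print("max error:", lf_sweep_eikonal_1d())
```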

  19. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705

  20. A Round-Efficient Authenticated Key Agreement Scheme Based on Extended Chaotic Maps for Group Cloud Meeting.

    PubMed

    Lin, Tsung-Hung; Tsung, Chen-Kun; Lee, Tian-Fu; Wang, Zeng-Bo

    2017-12-03

    Security is a critical issue for business applications. For example, a cloud meeting must provide strong security to maintain communication privacy. Considering the cloud meeting scenario, we apply extended chaotic maps to present a passwordless group authentication key agreement scheme, termed Passwordless Group Authentication Key Agreement (PL-GAKA). PL-GAKA improves the computation efficiency of the simple group password-based authenticated key agreement (SGPAKE) proposed by Lee et al. in terms of computing the session key. Since the extended chaotic map has a security level equivalent to the Diffie-Hellman key exchange scheme applied by SGPAKE, the security of PL-GAKA is not sacrificed when improving the computation efficiency. Moreover, PL-GAKA is a passwordless scheme, so password maintenance is not necessary. Short-term authentication is considered; hence, by dynamically generating a session key in each cloud meeting, the communication security is stronger than in other protocols. In our analysis, we first prove that each meeting member can get the correct information during the meeting. We analyze common security issues for the proposed PL-GAKA in terms of session key security, mutual authentication, perfect forward security, and data integrity. Moreover, we also demonstrate that communicating in PL-GAKA is secure against replay attacks, impersonation attacks, privileged insider attacks, and stolen-verifier attacks. Finally, an overall comparison is given to show the performance of PL-GAKA, SGPAKE and related solutions.
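
    The chaotic-map ingredient such schemes rely on is the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), used in a Diffie-Hellman-like fashion. The toy below demonstrates only this property over the reals; it is not PL-GAKA, and practical protocols (including extended chaotic maps) work over finite fields with additional protections.

```python
# Toy illustration of the Chebyshev-map property behind chaotic-map key
# agreement: T_r(T_s(x)) = T_s(T_r(x)) = T_{rs}(x), used Diffie-Hellman style.
# Real protocols (including extended chaotic maps, as in PL-GAKA) work over
# finite fields with extra protections; this floating-point demo is conceptual only.
import numpy as np

def chebyshev(n, x):
    """T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return np.cos(n * np.arccos(x))

x = 0.53                      # public base point
r, s = 271, 389               # Alice's and Bob's secret integers

A = chebyshev(r, x)           # Alice -> Bob
B = chebyshev(s, x)           # Bob -> Alice

key_alice = chebyshev(r, B)   # T_r(T_s(x))
key_bob = chebyshev(s, A)     # T_s(T_r(x))
print(key_alice, key_bob, np.isclose(key_alice, key_bob))
```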

  1. Jacobian-free approximate solvers for hyperbolic systems: Application to relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio

    2017-10-01

    We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they are suitable to be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable to be applied to general hyperbolic systems.
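
    The simplest member of the PVM family, a degree-zero viscosity polynomial, reduces to the Rusanov (local Lax-Friedrichs) flux, which already shows the Jacobian-free character: only physical flux evaluations and a bound on the maximum wave speed are needed. The sketch below applies it to Burgers' equation for brevity rather than to the RMHD system.

```python
# The simplest Jacobian-free flux of the PVM family: a degree-zero viscosity
# polynomial (Rusanov / local Lax-Friedrichs), needing only evaluations of the
# physical flux and a bound on the maximum wave speed. For brevity it is
# applied here to Burgers' equation rather than to the RMHD system.
import numpy as np

def physical_flux(u):
    return 0.5 * u**2                          # Burgers' flux f(u) = u^2 / 2

def max_speed(uL, uR):
    return np.maximum(np.abs(uL), np.abs(uR))  # bound on |f'(u)| = |u|

def pvm0_flux(uL, uR):
    alpha = max_speed(uL, uR)
    return 0.5 * (physical_flux(uL) + physical_flux(uR)) - 0.5 * alpha * (uR - uL)

def step(u, dx, cfl=0.45):
    F = pvm0_flux(u[:-1], u[1:])               # fluxes at interior interfaces
    dt = cfl * dx / np.max(np.abs(u))
    u_new = u.copy()
    u_new[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return u_new                               # end cells kept fixed (crude BCs)

# Shock formation from a smooth initial profile
x = np.linspace(0.0, 1.0, 401)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
for _ in range(300):
    u = step(u, x[1] - x[0])
print("min/max after 300 steps:", u.min(), u.max())
```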

  2. Emergency material allocation with time-varying supply-demand based on dynamic optimization method for river chemical spills.

    PubMed

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng

    2018-04-13

    Aiming to minimize the damage caused by river chemical spills, efficient emergency material allocation is critical for actual emergency rescue decision-making and a quick response. In this study, an emergency material allocation framework based on time-varying supply-demand constraints is developed to allocate emergency material, minimize the emergency response time, and satisfy the dynamic emergency material requirements in post-accident phases dealing with river chemical spills. First, the theoretically critical emergency response time is obtained for the emergency material allocation system to select a series of appropriate emergency material warehouses as potential supportive centers. Then, an enumeration method is applied to identify the practically critical emergency response time and the optimum emergency material allocation and replenishment scheme. Finally, the developed framework is applied to a computational experiment based on the south-to-north water transfer project in China. The results illustrate that the proposed methodology is a simple and flexible tool for appropriately allocating emergency material to satisfy time-dynamic demands during emergency decision-making. Therefore, decision-makers can identify an appropriate emergency material allocation scheme that balances time-effective and cost-effective objectives under different emergency pollution conditions.

  3. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with that of four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. Computer simulations and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and user-free, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
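
    For context, a baseline unweighted least-squares unwrapping step can be written as a discrete Poisson solve via the DCT (Ghiglia-Romero style), as sketched below; the rounding refinement that distinguishes the proposed algorithm is not reproduced here, and the synthetic test phase is an assumption.

```python
# Baseline sketch of unweighted least-squares phase unwrapping solved with a
# DCT-based Poisson solver (Ghiglia-Romero style). It illustrates the global
# least-squares step only; the rounding refinement proposed in the paper is
# not reproduced here.
import numpy as np
from scipy.fft import dctn, idctn

def wrap(phase):
    return np.angle(np.exp(1j * phase))

def unwrap_ls(psi):
    """Unweighted least-squares unwrapping of a wrapped phase map psi (radians)."""
    M, N = psi.shape
    # Wrapped forward differences (zero past the last row/column)
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])
    dy[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])
    # Divergence of the wrapped gradient field (Poisson right-hand side)
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]
    # Solve the discrete Poisson equation with reflective BCs via DCT-II
    rho_hat = dctn(rho, type=2, norm="ortho")
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    denom = 2.0 * np.cos(np.pi * m / M) + 2.0 * np.cos(np.pi * n / N) - 4.0
    denom[0, 0] = 1.0                     # avoid division by zero at the DC term
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                   # solution is defined up to a constant
    return idctn(phi_hat, type=2, norm="ortho")

# Quick check on a smooth synthetic phase surface
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
true_phase = 12.0 * x + 7.0 * y**2
recovered = unwrap_ls(wrap(true_phase))
err = (recovered - recovered.mean()) - (true_phase - true_phase.mean())
print("max deviation:", np.abs(err).max())
```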

  4. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraisal of the thus obtained models. While being neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
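
    The contrast between divided differences and derivative code can be illustrated with a toy forward-mode automatic differentiation based on dual numbers, as below; this is only a conceptual stand-in for the source-transformed derivative code of the coupled flow and heat transport model, and the tiny "forward model" is made up for the example.

```python
# Toy illustration of automatic differentiation: forward-mode AD with dual
# numbers gives Jacobian columns that are exact to machine precision, unlike
# divided differences with their truncation error. This is only a conceptual
# stand-in for the source-transformed derivative code described in the paper.
import math

class Dual:
    """Number carrying a value and a directional-derivative part."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)

def dexp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def forward_model(params):
    """Tiny made-up 'forward model': two observations from two parameters."""
    k, q = params
    t1 = k * q + dexp(q)
    t2 = k * k - 3.0 * q
    return [t1, t2]

def jacobian(model, p):
    cols = []
    for j in range(len(p)):
        duals = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(p)]
        cols.append([out.dot for out in model(duals)])
    # transpose: rows become d(observation) / d(parameters)
    return [list(row) for row in zip(*cols)]

print(jacobian(forward_model, [2.0, 0.5]))
```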

  5. Distributed strain measurement based on long-gauge FBG and delayed transmission/reflection ratiometric reflectometry for dynamic structural deformation monitoring.

    PubMed

    Nishiyama, Michiko; Igawa, Hirotaka; Kasai, Tokio; Watanabe, Naoyuki

    2015-02-10

    In this paper, we propose a delayed transmission/reflection ratiometric reflectometry (DTR(3)) scheme using a long-gauge fiber Bragg grating (FBG), which can be used for dynamic structural deformation monitoring of structures between a few and tens of meters in length, such as airplane wings and helicopter blades. FBG sensors used for multipoint sensing generally employ wavelength division multiplexing techniques utilizing several Bragg central wavelengths; by contrast, the DTR(3) interrogator uses a continuous pulse array based on a pseudorandom number code and a long-gauge FBG utilizing a single Bragg wavelength, and is composed of simple hardware devices. The DTR(3) scheme can detect distributed strain at a 50 cm spatial resolution using a long-gauge FBG with a 100 Hz sampling rate. We evaluated the strain sensing characteristics of the long-gauge FBG when attached to a 2.5 m aluminum bar and a 5.5 m helicopter blade model, determining these structures' natural frequencies in free vibration tests and their distributed strain characteristics in static tests.

  6. New Approaches to Coding Information using Inverse Scattering Transform

    NASA Astrophysics Data System (ADS)

    Frumin, L. L.; Gelash, A. A.; Turitsyn, S. K.

    2017-06-01

    Remarkable mathematical properties of the integrable nonlinear Schrödinger equation (NLSE) can offer advanced solutions for the mitigation of nonlinear signal distortions in optical fiber links. Fundamental optical soliton, continuous, and discrete eigenvalues of the nonlinear spectrum have already been considered for the transmission of information in fiber-optic channels. Here, we propose to apply signal modulation to the kernel of the Gelfand-Levitan-Marchenko equations that offers the advantage of a relatively simple decoder design. First, we describe an approach based on exploiting the general N -soliton solution of the NLSE for simultaneous coding of N symbols involving 4 ×N coding parameters. As a specific elegant subclass of the general schemes, we introduce a soliton orthogonal frequency division multiplexing (SOFDM) method. This method is based on the choice of identical imaginary parts of the N -soliton solution eigenvalues, corresponding to equidistant soliton frequencies, making it similar to the conventional OFDM scheme, thus, allowing for the use of the efficient fast Fourier transform algorithm to recover the data. Then, we demonstrate how to use this new approach to control signal parameters in the case of the continuous spectrum.

  7. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.

  8. Evaluating motives: Two simple tests to identify and avoid entanglement in legally dubious urine drug testing schemes.

    PubMed

    Barnes, Michael C; Worthy, Stacey L

    2015-01-01

    This article educates healthcare practitioners on the legal framework prohibiting abusive practices in urine drug testing (UDT) in medical settings, discusses several profit-driven UDT schemes that have resulted in enforcement actions, and provides recommendations for best practices in UDT to comply with state and federal fraud and anti-kickback statutes. The authors carefully reviewed and analyzed statutes, regulations, advisory opinions, case law, court documents, articles from legal journals, and news articles. Certain facts-driven UDT arrangements tend to violate federal and state healthcare laws and regulations, including the Stark law, the anti-kickback statute, the criminal health care fraud statute, and the False Claims Act. Healthcare practitioners who use UDT can help ensure that they are in compliance with applicable federal and state laws by evaluating whether their actions are motivated by providing proper care to their patients rather than by profits. They must avoid schemes that violate the spirit of the law while appearing to comply with the letter of the law. Such a simple self-evaluation of motive can reduce a practitioner's likelihood of civil fines and criminal liability.

  9. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  10. VLSI Technology for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is designing an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. A popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing scheme. The simulations of the VLSI structure of the optimised flexible spectrum sensing scheme are done using Verilog coding with the XILINX ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
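
    The energy detection rule itself is simple enough to sketch directly: compare the average received energy with a threshold derived from the noise power and a target false-alarm probability (Gaussian approximation for large sample counts). The snippet below shows this baseline rule only, not the paper's reduced-filter VLSI architecture; the signal parameters are assumptions.

```python
# Baseline energy-detection sensing: compare the received energy against a
# threshold set from the noise power and a target false-alarm probability
# (Gaussian approximation, valid for large sample counts). This illustrates
# the detection rule only, not the paper's reduced-filter VLSI architecture.
import numpy as np
from scipy.stats import norm

def energy_detector(x, noise_var, p_fa=0.01):
    n = len(x)
    statistic = np.sum(np.abs(x) ** 2) / n
    # Threshold for real-valued samples under the noise-only hypothesis
    threshold = noise_var * (1.0 + norm.isf(p_fa) * np.sqrt(2.0 / n))
    return statistic > threshold, statistic, threshold

rng = np.random.default_rng(1)
n, noise_var = 4096, 1.0
noise = rng.normal(scale=np.sqrt(noise_var), size=n)
primary = 0.5 * np.cos(2 * np.pi * 0.1 * np.arange(n))   # hypothetical primary-user tone

print("noise only:    ", energy_detector(noise, noise_var)[0])
print("signal present:", energy_detector(noise + primary, noise_var)[0])
```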

  11. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250

  12. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol(-1)). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  13. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol-1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  14. Checklist and Simple Identification Key for Frogs and Toads from District IV of The MADA Scheme, Kedah, Malaysia

    PubMed Central

    Jaafar, Ibrahim; Chai, Teoh Chia; Sah, Shahrul Anuar Mohd; Akil, Mohd Abdul Muin Md.

    2009-01-01

    A survey was conducted to catalogue the diversity of anurans in District IV of the Muda Agriculture Development Authority Scheme (MADA) in Kedah Darul Aman, Malaysia, from July 1996 to January 1997. Eight species of anurans from three families were present in the study area. Of these, the Common Grass Frog (Fejervarya limnocharis) was the most abundant, followed by Mangrove Frog (Fejervarya cancrivora), Long-legged Frog (Hylarana macrodactyla), and Common Toad (Duttaphrynus melanostictus). Puddle Frog (Occidozyga lima), Taiwanese Giant Frog (Hoplobatrachus rugulosus), and Banded Bullfrog (Kaloula pulchra) were rare during the sampling period, and only one Paddy Frog (Hylarana erythraea) was captured. A simple identification key for the anurans of this area is included for use by scientists and laymen alike. PMID:24575178

  15. An On-Demand Emergency Packet Transmission Scheme for Wireless Body Area Networks.

    PubMed

    Al Ameen, Moshaddique; Hong, Choong Seon

    2015-12-04

    The rapid development of sensor devices that can actively monitor human activities has given rise to a new field called the wireless body area network (BAN). A BAN can manage devices in, on and around the human body. Major requirements of such a network are energy efficiency, long lifetime, low delay, security, etc. Traffic in a BAN can be scheduled (normal) or event-driven (emergency). Traditional media access control (MAC) protocols use duty cycling to improve performance. A sleep-wake-up cycle is employed to save energy. However, this mechanism lacks features to handle emergency traffic in a prompt and immediate manner. To deliver an emergency packet, a node has to wait until the receiver is awake. It also suffers from overheads, such as idle listening, overhearing and control packet handshakes. An external radio-triggered wake-up mechanism is proposed to handle prompt communication. It can reduce the overheads and improve the performance through an on-demand scheme. In this work, we present a simple-to-implement on-demand packet transmission scheme, taking into consideration the requirements of a BAN. The major concern is handling the event-based emergency traffic. The performance analysis of the proposed scheme is presented. The results showed significant improvements in the overall performance of a BAN compared to state-of-the-art protocols in terms of energy consumption, delay and lifetime.
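
    As a rough illustration of why an on-demand wake-up helps emergency delivery, the toy calculation below compares the waiting time under plain duty cycling (the sender must wait for the receiver's next scheduled wake-up) with an externally triggered wake-up (only a small trigger latency remains). All timing values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def duty_cycle_delay(event_time, wake_interval):
    """Delay until the receiver's next scheduled wake-up slot (seconds)."""
    next_wake = np.ceil(event_time / wake_interval) * wake_interval
    return next_wake - event_time

def wakeup_radio_delay(trigger_latency=0.002):
    """With an external radio-triggered wake-up the receiver is woken on
    demand, so only a small trigger latency remains (illustrative value)."""
    return trigger_latency

rng = np.random.default_rng(1)
events = rng.uniform(0, 100, size=10_000)   # emergency event times (s)
dc_delays = [duty_cycle_delay(t, wake_interval=1.0) for t in events]
print(f"duty-cycled mean delay: {np.mean(dc_delays) * 1e3:.0f} ms")
print(f"on-demand   mean delay: {wakeup_radio_delay() * 1e3:.0f} ms")
```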

  16. R&D Incentives for Neglected Diseases

    PubMed Central

    Dimitri, Nicola

    2012-01-01

    Neglected diseases are typically characterized as those for which adequate drug treatment is lacking, and the potential return on effort in research and development (R&D), to produce new therapies, is too small for companies to invest significant resources in the field. In recent years various incentives schemes to stimulate R&D by pharmaceutical firms have been considered. Broadly speaking, these can be classified either as ‘push’ or ‘pull’ programs. Hybrid options, that include push and pull incentives, have also become increasingly popular. Supporters and critics of these various incentive schemes have argued in favor of their relative merits and limitations, although the view that no mechanism is a perfect fit for all situations appears to be widely held. For this reason, the debate on the advantages and disadvantages of different approaches has been important for policy decisions, but is dispersed in a variety of sources. With this in mind, the aim of this paper is to contribute to the understanding of the economic determinants behind R&D investments for neglected diseases by comparing the relative strength of different incentive schemes within a simple economic model, based on the assumption of profit maximizing firms. The analysis suggests that co-funded push programs are generally more efficient than pure pull programs. However, by setting appropriate intermediate goals hybrid incentive schemes could further improve efficiency. PMID:23284648

  17. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are the two main dissipation processes in quasi-brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main difficulty of modeling is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, with which to validate such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi-brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations is developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient and guarantees systematic numerical convergence.

  18. Long-distance quantum communication over noisy networks without long-time quantum memory

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Łodyga, Justyna; Pankowski, Łukasz; Przysiężna, Anna

    2014-12-01

    The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008), 10.1103/PhysRevA.78.062324] the authors put forward an important isomorphism between storing quantum information in a dimension D and transmitting quantum information in a (D+1)-dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state is exposed to an error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on the 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on the 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state from a specific node into the whole two-dimensional network through a virtual EPR pair existing within the rest of the network qubits. We present an analytic lower bound on the fidelity of the encoding and decoding procedure, using as our main tool a modified metric on the space-time lattice, deviating from a taxicab metric at the first and the last time slices.

  19. An On-Demand Emergency Packet Transmission Scheme for Wireless Body Area Networks

    PubMed Central

    Al Ameen, Moshaddique; Hong, Choong Seon

    2015-01-01

    The rapid development of sensor devices that can actively monitor human activities has given rise to a new field called the wireless body area network (BAN). A BAN can manage devices in, on and around the human body. Major requirements of such a network are energy efficiency, long lifetime, low delay, security, etc. Traffic in a BAN can be scheduled (normal) or event-driven (emergency). Traditional media access control (MAC) protocols use duty cycling to improve performance. A sleep-wake-up cycle is employed to save energy. However, this mechanism lacks features to handle emergency traffic in a prompt and immediate manner. To deliver an emergency packet, a node has to wait until the receiver is awake. It also suffers from overheads, such as idle listening, overhearing and control packet handshakes. An external radio-triggered wake-up mechanism is proposed to handle prompt communication. It can reduce the overheads and improve the performance through an on-demand scheme. In this work, we present a simple-to-implement on-demand packet transmission scheme, taking into consideration the requirements of a BAN. The major concern is handling the event-based emergency traffic. The performance analysis of the proposed scheme is presented. The results showed significant improvements in the overall performance of a BAN compared to state-of-the-art protocols in terms of energy consumption, delay and lifetime. PMID:26690161

  20. Preparation of Greenberger-Horne-Zeilinger Entangled States in the Atom-Cavity Systems

    NASA Astrophysics Data System (ADS)

    Xu, Nan

    2018-02-01

    We present a new simple scheme for the preparation of Greenberger-Horne-Zeilinger maximally entangled states of two two-level atoms. The distinct feature of the effective Hamiltonian is that there is no energy exchange between the atoms and the cavity. Thus the scheme is insensitive to the effects of the cavity field and atomic radiation. This protocol may be realizable within the realm of current physical experiments.

  1. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
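
    The abstract describes blending several numerical fluxes according to the local physical situation. As a much-simplified sketch of that idea (two fluxes instead of three, the 1-D Burgers equation instead of Euler/Navier-Stokes, and an ad hoc jump-based sensor), the code below shows how a robust Rusanov flux and a low-dissipation central flux can be convexly blended; none of the names, weights, or parameters come from the paper.

```python
import numpy as np

def blended_flux(u_left, u_right):
    """Convex blend of a low-dissipation central flux and a robust Rusanov
    flux for the 1-D Burgers equation, weighted toward Rusanov where a crude
    jump-based sensor detects a discontinuity."""
    f_left, f_right = 0.5 * u_left**2, 0.5 * u_right**2
    central = 0.5 * (f_left + f_right)
    alpha = np.maximum(np.abs(u_left), np.abs(u_right))
    rusanov = central - 0.5 * alpha * (u_right - u_left)
    # Sensor in [0, 1]: ~0 in smooth regions, ~1 across strong jumps.
    w = np.abs(u_right - u_left) / (np.abs(u_left) + np.abs(u_right) + 1e-12)
    return (1.0 - w) * central + w * rusanov

# One conservative forward-Euler step on a periodic grid (a shock will form).
nx, dx, dt = 200, 1.0 / 200, 2.0e-3
u = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, nx, endpoint=False))
flux = blended_flux(u, np.roll(u, -1))        # flux at interface i+1/2
u -= dt / dx * (flux - np.roll(flux, 1))      # update of cell averages
```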

  2. Towards information-optimal simulation of partial differential equations.

    PubMed

    Leike, Reimar H; Enßlin, Torsten A

    2018-03-01

    Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

  3. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  4. Communication: Density functional theory embedding with the orthogonality constrained basis set expansion procedure

    NASA Astrophysics Data System (ADS)

    Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon

    2017-06-01

    Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.

  5. Interpretation for scales of measurement linking with abstract algebra

    PubMed Central

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: “Nominal”, “Ordinal”, “Interval” and “Ratio”. This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; ‘Abelian modulo additive group’ for “Ordinal scale” accompanied with ‘zero’, ‘Abelian additive group’ for “Interval scale”, and ‘field’ for “Ratio scale”. Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected. PMID:24987515

  6. Architectures for Quantum Simulation Showing a Quantum Speedup

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens

    2018-04-01

    One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy," referring to the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based, quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.

  7. Interpretation for scales of measurement linking with abstract algebra.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; 'Abelian modulo additive group' for "Ordinal scale" accompanied with 'zero', 'Abelian additive group' for "Interval scale", and 'field' for "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected.

  8. On numerical instabilities of Godunov-type schemes for strong shocks

    NASA Astrophysics Data System (ADS)

    Xie, Wenjia; Li, Wei; Li, Hua; Tian, Zhengyu; Pan, Sha

    2017-12-01

    It is well known that low diffusion Riemann solvers with minimal smearing on contact and shear waves are vulnerable to shock instability problems, including the carbuncle phenomenon. In the present study, we concentrate on exploring where the instability grows out and how the dissipation inherent in Riemann solvers affects the unstable behaviors. With the help of numerical experiments and a linearized analysis method, it has been found that the shock instability is strongly related to the unstable modes of intermediate states inside the shock structure. The consistency of mass flux across the normal shock is needed for a Riemann solver to capture strong shocks stably. The famous carbuncle phenomenon is interpreted as the consequence of the inconsistency of mass flux across the normal shock for a low diffusion Riemann solver. Based on the results of numerical experiments and the linearized analysis, a robust Godunov-type scheme with a simple cure for the shock instability is suggested. With only the dissipation corresponding to shear waves introduced in the vicinity of strong shocks, the instability problem is circumvented. Numerical results of several carefully chosen strong shock wave problems are investigated to demonstrate the robustness of the proposed scheme.

  9. Processing strategy for water-gun seismic data from the Gulf of Mexico

    USGS Publications Warehouse

    Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.

    2000-01-01

    In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This non-minimum-phase wavelet deconvolution compresses the long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.

  10. The method of space-time and conservation element and solution element: A new approach for solving the Navier-Stokes and Euler equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1995-01-01

    A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.

  11. QKD using polarization encoding with active measurement basis selection

    NASA Astrophysics Data System (ADS)

    Duplinskiy, A.; Ustimchik, V.; Kanapin, A.; Kurochkin, Y.

    2017-11-01

    We report a proof-of-principle quantum key distribution experiment using a one-way optical scheme with polarization encoding implementing the BB84 protocol. LiNbO3 phase modulators are used for generating polarization states for Alice and active basis selection for Bob. This allows the former to use a single laser source, while the latter needs only two single-photon detectors. The presented optical scheme is simple and consists of standard fiber components. A calibration algorithm for the three polarization controllers used in the scheme has been developed. The experiment was carried out with laser pulses at a 10 MHz repetition rate over a distance of 50 km of standard telecom optical fiber.
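
    For readers unfamiliar with the protocol logic being implemented optically here, the toy sketch below simulates only the bit/basis bookkeeping of BB84 with random (active) basis selection on Bob's side; polarization optics, channel loss, decoy states, and error correction are not modelled, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20

# Alice: random bits and random preparation bases (0 = rectilinear, 1 = diagonal).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob: active (random) measurement-basis selection, as in the scheme above.
bob_bases = rng.integers(0, 2, n)
# Ideal channel: Bob recovers Alice's bit when bases match, a random bit otherwise.
bob_bits = np.where(bob_bases == alice_bases, alice_bits, rng.integers(0, 2, n))

# Sifting over the classical channel: keep only the matching-basis positions.
keep = alice_bases == bob_bases
sifted_key = alice_bits[keep]
print(f"kept {keep.sum()} of {n} pulses; sifted key: {sifted_key}")
```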

  12. Power corrections in the N -jettiness subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$- and $gg$-initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  13. Power corrections in the N -jettiness subtraction scheme

    DOE PAGES

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    2017-03-30

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$- and $gg$-initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  14. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities

    PubMed Central

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-01-01

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits. PMID:26225781

  15. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities.

    PubMed

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-07-30

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits.

  16. Cryptanalysis of SFLASH with Slightly Modified Parameters

    NASA Astrophysics Data System (ADS)

    Dubois, Vivien; Fouque, Pierre-Alain; Stern, Jacques

    SFLASH is a signature scheme which belongs to a family of multivariate schemes proposed by Patarin et al. in 1998 [9]. The SFLASH scheme itself was designed in 2001 [8] and was selected in 2003 by the NESSIE European Consortium [6] as the best known solution for implementation on low-cost smart cards. In this paper, we show that slight modifications of the parameters of SFLASH within the general family initially proposed render the scheme insecure. The attack uses simple linear algebra and allows one to forge a signature for an arbitrary message in a matter of minutes for practical parameters, using only the public key. Although SFLASH itself is not amenable to our attack, it is worrying to observe that no rationale was ever offered for this "lucky" choice of parameters.

  17. Guidance concepts for time-based flight operations

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1990-01-01

    Airport congestion and the associated delays are severe in today's airspace system and are expected to increase. NASA and the FAA are investigating various methods of alleviating this problem through new technology and operational procedures. One concept for improving airspace productivity is time-based control of aircraft. Research to date has focused primarily on the development of time-based flight management systems and Air Traffic Control operational procedures. Flight operations may, however, require special onboard guidance in order to satisfy the Air Traffic Control imposed time constraints. The results of a simulation study aimed at evaluating several time-based guidance concepts in terms of tracking performance, pilot workload, and subjective preference are presented. The guidance concepts tested varied in complexity from simple digital time-error feedback to an advanced time-referenced-energy guidance scheme.
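
    As a hedged illustration of the simplest guidance concept mentioned, digital time-error feedback, the sketch below adjusts a commanded airspeed in proportion to the predicted arrival-time error, clipped to airspeed limits. The gain, the limits, and the units are invented for illustration and are not the values studied in the report.

```python
def speed_command(nominal_speed, time_error, gain=2.0,
                  v_min=110.0, v_max=180.0):
    """Simple digital time-error feedback: time_error is the predicted
    arrival time minus the required time (s), so a positive value means
    the aircraft is late and should speed up. Speeds are in m/s and the
    gain/limits are illustrative placeholders."""
    command = nominal_speed + gain * time_error
    return min(max(command, v_min), v_max)

# Aircraft predicted to arrive 12 s late at the metering fix:
print(speed_command(nominal_speed=140.0, time_error=12.0))   # 164.0
```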

  18. Dynamic Sensor Networks

    DTIC Science & Technology

    2004-03-01

    turned off. SLEEP: Set the timer for 30 seconds before the scheduled transmit time, then sleep the processor. WAKE: When the timer trips, power up the processor...slots where none of its neighbors are scheduled to transmit. This allows the sensor nodes to perform a simple power management scheme that puts the...routing This simple case study highlights the following crucial observation: optimal traffic scheduling in energy-constrained networks requires future
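
    A minimal sketch of the SLEEP/WAKE logic described in the fragment above, assuming a node that knows its own transmit slot; the 30-second guard time comes from the text, while the function name, time values, and return convention are illustrative.

```python
def sleep_schedule(now, tx_slot_time, guard=30.0):
    """SLEEP/WAKE sketch: sleep until `guard` seconds before this node's
    scheduled transmit slot, then wake. All times are in seconds."""
    wake_at = tx_slot_time - guard
    if now < wake_at:
        return ("SLEEP", wake_at - now)   # set the timer, power down the processor
    return ("WAKE", 0.0)                  # timer tripped, power up and transmit

print(sleep_schedule(now=100.0, tx_slot_time=400.0))   # ('SLEEP', 270.0)
print(sleep_schedule(now=380.0, tx_slot_time=400.0))   # ('WAKE', 0.0)
```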

  19. Spray algorithm without interface construction

    NASA Astrophysics Data System (ADS)

    Al-Kadhem Majhool, Ahmed Abed; Watkins, A. P.

    2012-05-01

    This research aims to create a new and robust family of convective schemes that capture the interface between the dispersed and carrier phases of a spray without the need to construct the interface boundary. The Weighted Average Flux (WAF) scheme is selected because it is a random-flux scheme that is second-order accurate in space and time. The convective flux at each cell face uses the WAF scheme blended with the Switching Technique for Advection and Capturing of Surfaces (STACS) scheme for high-resolution flux limiters. In the next step, the high-resolution scheme is blended with the WAF scheme, using a switching strategy, to provide sharpness and boundedness of the interface. In this work, the Eulerian-Eulerian framework for non-reactive turbulent sprays is formulated in terms of the proposed methodology, namely moments of the drop size distribution, presented by Beck and Watkins [1]. The computational spray model avoids the need to segregate the local droplet number distribution into parcels of identical droplets. The proposed scheme is tested on capturing the spray edges when modelling hollow-cone sprays without the need to reconstruct the two-phase interface. A simple comparison is made between the TVD and WAF schemes using the same flux limiter on a convecting hollow-cone spray. Results show that the WAF scheme gives a better prediction than the TVD scheme. The only way to check the accuracy of the presented models is by evaluating the spray sheet thickness.

  20. Development Of A Data Assimilation Capability For RAPID

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations associated with the increasing ability to monitor surface water from space motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early developments of such data assimilation approach into RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations and our early results demonstrate the feasibility of the approach. Additionally, the use of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately apply the scheme globally.
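
    A minimal sketch of the kind of linear (direct) Kalman analysis step described above, correcting per-reach streamflow forecasts with a few gauge or altimetry observations. This is not the RAPID implementation; the matrices, reach counts, and numerical values are illustrative.

```python
import numpy as np

def kalman_update(x_forecast, P_forecast, y_obs, H, R):
    """One linear Kalman analysis step: correct forecast streamflows
    x_forecast (one entry per reach) with observations y_obs.
    H maps model reaches to observed reaches; R is observation error."""
    S = H @ P_forecast @ H.T + R                                    # innovation covariance
    K = P_forecast @ H.T @ np.linalg.solve(S, np.eye(len(y_obs)))   # Kalman gain
    x_analysis = x_forecast + K @ (y_obs - H @ x_forecast)
    P_analysis = (np.eye(len(x_forecast)) - K @ H) @ P_forecast
    return x_analysis, P_analysis

# Toy example: 5 reaches, gauges on reaches 1 and 3 (0-based indices).
x_f = np.array([120.0, 80.0, 60.0, 45.0, 30.0])       # forecast streamflow (m^3/s)
P_f = np.diag([100.0, 64.0, 36.0, 25.0, 16.0])        # forecast error covariance
H = np.zeros((2, 5))
H[0, 1] = 1.0
H[1, 3] = 1.0
y = np.array([90.0, 40.0])                            # observed streamflow (m^3/s)
R = np.diag([4.0, 4.0])                               # observation error covariance
x_a, _ = kalman_update(x_f, P_f, y, H, R)
print(np.round(x_a, 1))
```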

  1. Robust LS-SVM-based adaptive constrained control for a class of uncertain nonlinear systems with time-varying predefined performance

    NASA Astrophysics Data System (ADS)

    Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping

    2018-03-01

    This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-squares-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required, through a norm-form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. Therein, to avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the "explosion of terms" brought about by the recursive virtual controller design, is overcome. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.

  2. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    PubMed Central

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-01-01

    Objective: To evaluate the diagnostic performance of IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those who were scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examinations within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold standard diagnosis was based on histological findings or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 in 15) and a specificity of 97.4%. Sausage-shaped appearance was identified in nearly all cases (14 in 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40%, respectively. Conclusion: IOTA simple rules plus pattern recognition is relatively effective in predicting tubal cancer. Thus, we propose a simple scheme for the diagnosis of tubal cancer, as follows. First of all, the adnexal masses are evaluated with the IOTA simple rules. If the B-rules could be applied, tubal cancer is reliably excluded. If the M-rules could be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed. PMID:29172273
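
    The proposed diagnostic flow is simple enough to express as pseudocode-style logic. The sketch below only paraphrases the abstract for illustration (illustrative booleans, not a validated clinical tool): the IOTA simple rules are applied first, and pattern-recognition features are consulted only when the M-rules apply or the result is inconclusive.

```python
def assess_adnexal_mass(b_rules_apply, m_rules_apply,
                        sausage_shaped, incomplete_septum, ipsilateral_ovary_seen):
    """Illustrative paraphrase of the scheme in the abstract above: apply the
    IOTA simple rules first; fall back to pattern recognition (tubal-cancer
    features) when the result is malignant or inconclusive."""
    if b_rules_apply and not m_rules_apply:
        return "benign pattern: tubal cancer reliably excluded"
    if any([sausage_shaped, incomplete_septum, ipsilateral_ovary_seen]):
        return "features suggestive of tubal cancer: careful delineation advised"
    return "inconclusive: further assessment needed"

print(assess_adnexal_mass(False, True, True, False, True))
```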

  3. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    PubMed

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-11-26

    Objective: To evaluate the diagnostic performance of IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those who were scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examinations within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold standard diagnosis was based on histological findings or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 in 15) and a specificity of 97.4%. Sausage-shaped appearance was identified in nearly all cases (14 in 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40%, respectively. Conclusion: IOTA simple rules plus pattern recognition is relatively effective in predicting tubal cancer. Thus, we propose a simple scheme for the diagnosis of tubal cancer, as follows. First of all, the adnexal masses are evaluated with the IOTA simple rules. If the B-rules could be applied, tubal cancer is reliably excluded. If the M-rules could be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed.

  4. Efficient algorithms for the simulation of non-adiabatic electron transfer in complex molecular systems: application to DNA.

    PubMed

    Kubař, Tomáš; Elstner, Marcus

    2013-04-28

    In this work, a fragment-orbital density functional theory-based method is combined with two different non-adiabatic schemes for the propagation of the electronic degrees of freedom. This allows us to perform unbiased simulations of electron transfer processes in complex media, and the computational scheme is applied to the transfer of a hole in solvated DNA. It turns out that the mean-field approach, where the wave function of the hole is driven into a superposition of adiabatic states, leads to over-delocalization of the hole charge. This problem is avoided using a surface hopping scheme, resulting in a smaller rate of hole transfer. The method is highly efficient due to the on-the-fly computation of the coarse-grained DFT Hamiltonian for the nucleobases, which is coupled to the environment using a QM/MM approach. The computational efficiency and partial parallel character of the methodology make it possible to simulate electron transfer in systems of relevant biochemical size on a nanosecond time scale. Since standard non-polarizable force fields are applied in the molecular-mechanics part of the calculation, a simple scaling scheme was introduced into the electrostatic potential in order to simulate the effect of electronic polarization. It is shown that electronic polarization has an important effect on the features of charge transfer. The methodology is applied to two kinds of DNA sequences, illustrating the features of transfer along a flat energy landscape as well as over an energy barrier. The performance and relative merit of the mean-field scheme and the surface hopping for this application are discussed.

  5. Advances in Optical Fiber-Based Faraday Rotation Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, A D; McHale, G B; Goerz, D A

    2009-07-27

    In the past two years, we have used optical fiber-based Faraday Rotation Diagnostics (FRDs) to measure pulsed currents on several dozen capacitively driven and explosively driven pulsed power experiments. We have made simplifications to the necessary hardware for quadrature-encoded polarization analysis, including development of an all-fiber analysis scheme. We have developed a numerical model that is useful for predicting and quantifying deviations from the ideal diagnostic response. We have developed a method of analyzing quadrature-encoded FRD data that is simple to perform and offers numerous advantages over several existing methods. When comparison has been possible, we have seen good agreement between our FRDs and other current sensors.

  6. Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, pablo; Noebe, Ronald D.; Abel, Phillip

    2007-01-01

    This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.

  7. Frequency domain surface EMG sensor fusion for estimating finger forces.

    PubMed

    Potluri, Chandrasekhar; Kumar, Parmod; Anugolu, Madhavi; Urfer, Alex; Chiu, Steve; Naidu, D; Schoen, Marco P

    2010-01-01

    Extracting or estimating skeletal hand/finger forces using surface electromyographic (sEMG) signals poses many challenges due to cross-talk, noise, and temporally and spatially modulated signal characteristics. Normal sEMG measurements are based on single-sensor data. In this paper, array sensors are used along with a proposed sensor fusion scheme that results in a simple Multi-Input-Single-Output (MISO) transfer function. Experimental data are used along with system identification to find this MISO system. A Genetic Algorithm (GA) approach is employed to optimize the characteristics of the MISO system. The proposed fusion-based approach is tested experimentally and indicates improvement in finger/hand force estimation.

  8. A simple optical model to estimate suspended particulate matter in Yellow River Estuary.

    PubMed

    Qiu, Zhongfeng

    2013-11-18

    Distribution of the suspended particulate matter (SPM) concentration is a key issue for analyzing the deposition and erosion variability of the estuary and evaluating material fluxes from river to sea. Satellite remote sensing is a useful tool to investigate the spatial variation of SPM concentration in estuarine zones. However, algorithm development and validation for SPM concentration in the Yellow River Estuary (YRE) have seldom been performed, and therefore our knowledge of the quality of retrieved SPM concentrations is poor. In this study, we developed a new simple optical model to estimate SPM concentration in the YRE by specifying the optimal wavelength ratios (600-710 nm)/(530-590 nm), based on observations from five cruises between 2004 and 2011. The simple optical model was carefully calibrated and the optimal band ratios were selected for application to multiple sensors: 678/551 for the Moderate Resolution Imaging Spectroradiometer (MODIS), 705/560 for the Medium Resolution Imaging Spectrometer (MERIS) and 680/555 for the Geostationary Ocean Color Imager (GOCI). With the simple optical model, the relative percentage difference and the mean absolute error were 35.4% and 15.6 g m(-3), respectively, for MODIS, 42.2% and 16.3 g m(-3) for MERIS, and 34.2% and 14.7 g m(-3) for GOCI, based on an independent validation data set. Our results showed a good precision of SPM concentration estimates using the new simple optical model, in contrast with the poor estimates derived from existing empirical models. Provided an atmospheric correction scheme is available for the satellite imagery, our simple model could be used for quantitative monitoring of SPM concentrations in the YRE.
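
    To make the band-ratio idea concrete, here is a hedged sketch of fitting a simple empirical ratio model to calibration data and applying it. The log-linear functional form, the synthetic data, and the example band pair (e.g. Rrs(678)/Rrs(551) for MODIS) are stand-ins, not the calibration reported in the paper.

```python
import numpy as np

def fit_band_ratio_model(ratio, spm):
    """Fit an illustrative empirical model log10(SPM) = a + b * ratio
    by least squares, where `ratio` is a red/green reflectance ratio."""
    b, a = np.polyfit(ratio, np.log10(spm), 1)
    return a, b

def predict_spm(ratio, a, b):
    return 10 ** (a + b * ratio)

# Synthetic calibration data standing in for in-situ cruise measurements.
rng = np.random.default_rng(3)
true_a, true_b = 0.2, 1.1
ratio = rng.uniform(0.5, 2.5, 60)                       # e.g. Rrs(678)/Rrs(551)
spm = 10 ** (true_a + true_b * ratio) * rng.lognormal(0.0, 0.1, 60)
a, b = fit_band_ratio_model(ratio, spm)
print(predict_spm(1.5, a, b))                           # estimated SPM in g m^-3
```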

  9. 3D CSEM data inversion using Newton and Halley class methods

    NASA Astrophysics Data System (ADS)

    Amaya, M.; Hansen, K. R.; Morten, J. P.

    2016-05-01

    For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that the convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is either similar to or slightly higher than that of the GN scheme, close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.
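
    The difference between the Gauss-Newton and full Newton model updates discussed above can be summarized in a few lines. The sketch below is schematic only: in the paper the Jacobian and the residual second derivatives are assembled from Green's functions and the update is solved with a trust-region conjugate-gradient solver, none of which is reproduced here; `second_order_term` and the damping parameter `mu` are generic placeholders.

```python
import numpy as np

def gauss_newton_step(J, r, mu=0.0):
    """Gauss-Newton model update: solve (J^T J + mu I) dm = -J^T r,
    where J is the residual Jacobian and mu a generic damping term."""
    A = J.T @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ r)

def newton_step(J, r, second_order_term, mu=0.0):
    """Full Newton update: retain the residual-curvature term
    sum_i r_i * Hess(r_i), which Gauss-Newton discards; it is passed
    in here as the precomputed matrix `second_order_term`."""
    A = J.T @ J + second_order_term + mu * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ r)

# Tiny synthetic usage with a random Jacobian and residual vector.
rng = np.random.default_rng(2)
J = rng.normal(size=(20, 4))
r = rng.normal(size=20)
S = 0.01 * np.eye(4)        # stand-in for the residual second-derivative term
print(gauss_newton_step(J, r, mu=1e-3))
print(newton_step(J, r, S, mu=1e-3))
```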

  10. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the renormalization scheme and scale dependence of the strong coupling and of the perturbative coefficients do not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {β_{m≥2}}-terms. We adopt two four-loop examples, e+e- → hadrons and τ decays into hadrons, for detailed analysis. Our results show that under the conventional scale setting, by including more and more loop terms, the scheme dependence of the pQCD prediction cannot be reduced as efficiently as the scale dependence. Thus a proper scale-setting approach should be important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, which provides a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of the "unphysical" theoretical conventions.

  11. Automatic epileptic seizure detection in EEGs using MF-DFA, SVM based on cloud computing.

    PubMed

    Zhang, Zhongnan; Wen, Tingxi; Huang, Wei; Wang, Meihong; Li, Chunfeng

    2017-01-01

    Epilepsy is a chronic disease with transient brain dysfunction that results from the sudden abnormal discharge of neurons in the brain. Since electroencephalography (EEG) is a harmless and noninvasive detection method, it plays an important role in the detection of neurological diseases. However, analyzing EEG to detect neurological diseases is often difficult because the brain's electrical signals are random, non-stationary and nonlinear. To overcome this difficulty, this study develops a new computer-aided scheme for automatic epileptic seizure detection in EEGs based on multi-fractal detrended fluctuation analysis (MF-DFA) and a support vector machine (SVM). In the first stage, the scheme extracts features from the EEG by MF-DFA. It then applies a genetic algorithm (GA) to select the SVM parameters and classifies the training data with the SVM using the selected features. Finally, the trained SVM classifier is used to detect neurological diseases. The algorithm uses MLlib, the machine learning library of Spark, and runs on a cloud platform. Applied to a public dataset, the results show that the new feature extraction method and scheme can detect signals with fewer features and that the classification accuracy reaches up to 99%. MF-DFA is a promising approach to extracting features for analyzing EEG because of its simple procedure and small number of parameters. The features obtained by MF-DFA represent the samples as well as traditional wavelet transforms and Lyapunov exponents do. The GA can always find useful parameters for the SVM given enough execution time. The results illustrate that the classification model achieves comparable accuracy, which means that it is effective for epileptic seizure detection.
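
    A single-machine sketch of the pipeline is given below: simplified MF-DFA generalized-Hurst features followed by an RBF SVM, with a plain grid search standing in for the paper's GA and synthetic signals standing in for EEG. It is not the Spark/MLlib cloud implementation.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def mfdfa_features(x, scales=(16, 32, 64, 128), qs=(-3, -1, 2, 3), order=1):
        """Simplified MF-DFA: generalized Hurst exponents h(q) used as features."""
        profile = np.cumsum(x - np.mean(x))          # integrated, mean-subtracted signal
        feats = []
        for q in qs:
            logF, logS = [], []
            for s in scales:
                n_seg = len(profile) // s
                rms = []
                for i in range(n_seg):
                    seg = profile[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                rms = np.asarray(rms)
                if q == 0:
                    Fq = np.exp(0.5 * np.mean(np.log(rms ** 2)))
                else:
                    Fq = np.mean(rms ** q) ** (1.0 / q)
                logF.append(np.log(Fq)); logS.append(np.log(s))
            feats.append(np.polyfit(logS, logF, 1)[0])   # slope = h(q)
        return np.asarray(feats)

    # Toy data: white-noise epochs vs. random-walk-like epochs (placeholders for real EEG)
    rng = np.random.default_rng(0)
    X = np.array([mfdfa_features(rng.standard_normal(1024)) for _ in range(40)] +
                 [mfdfa_features(np.cumsum(rng.standard_normal(1024))) for _ in range(40)])
    y = np.array([0] * 40 + [1] * 40)

    # Grid search stands in for the paper's GA-based parameter selection
    clf = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10, 100], "gamma": [0.1, 1, 10]}, cv=5)
    clf.fit(X, y)
    print(clf.best_params_, clf.best_score_)
    ```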

  12. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
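
    For orientation only, the standard dimensional-regularization master integral is quoted below; the paper's scheme modifies how such integrals are treated in the effective theory, which is not captured by this textbook formula.

    ```latex
    % Standard dimensional-regularization master integral (Minkowski signature),
    % quoted for orientation only; it is not the modified EFT scheme of the paper.
    \int \frac{d^d \ell}{(2\pi)^d}\,\frac{1}{(\ell^2-\Delta)^n}
      = \frac{(-1)^n\, i}{(4\pi)^{d/2}}\,
        \frac{\Gamma\!\left(n-\tfrac{d}{2}\right)}{\Gamma(n)}\,
        \left(\frac{1}{\Delta}\right)^{n-\frac{d}{2}}
    ```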

  13. Decentralized Adaptive Control For Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Precise knowledge of dynamics not required. Proposed scheme for control of multijointed robotic manipulator calls for independent control subsystem for each joint, consisting of proportional/integral/derivative feedback controller and position/velocity/acceleration feedforward controller, both with adjustable gains. Independent joint controller compensates for unpredictable effects, gravitation, and dynamic coupling between motions of joints, while forcing joints to track reference trajectories. Scheme amenable to parallel processing in distributed computing system wherein each joint controlled by relatively simple algorithm on dedicated microprocessor.
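
    A minimal sketch of one such independent joint controller (PID feedback plus position/velocity/acceleration feedforward) is shown below; the gains are held fixed and the single-joint plant is a toy, whereas the proposed scheme adjusts the gains adaptively.

    ```python
    import numpy as np

    class JointController:
        """Independent per-joint controller: PID feedback plus
        position/velocity/acceleration feedforward (illustrative fixed gains)."""
        def __init__(self, kp, ki, kd, fp, fv, fa, dt):
            self.kp, self.ki, self.kd = kp, ki, kd      # feedback gains
            self.fp, self.fv, self.fa = fp, fv, fa      # feedforward gains
            self.dt, self.int_e, self.prev_e = dt, 0.0, 0.0

        def torque(self, q, q_ref, qd_ref, qdd_ref):
            e = q_ref - q
            self.int_e += e * self.dt
            de = (e - self.prev_e) / self.dt
            self.prev_e = e
            fb = self.kp * e + self.ki * self.int_e + self.kd * de
            ff = self.fp * q_ref + self.fv * qd_ref + self.fa * qdd_ref
            return fb + ff

    # Each joint runs its own controller; coupling between joints is treated
    # as a disturbance to be rejected by the feedback terms.
    ctrl = JointController(kp=50.0, ki=5.0, kd=8.0, fp=0.0, fv=1.0, fa=0.2, dt=1e-3)
    q, qd = 0.0, 0.0
    for k in range(2000):
        t = k * 1e-3
        q_ref, qd_ref, qdd_ref = np.sin(t), np.cos(t), -np.sin(t)
        tau = ctrl.torque(q, q_ref, qd_ref, qdd_ref)
        qdd = tau - 0.5 * qd                   # toy single-joint dynamics (unit inertia)
        qd += qdd * 1e-3
        q += qd * 1e-3
    print(f"final tracking error: {abs(np.sin(2.0) - q):.3f}")
    ```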

  14. Experimental verification of Pyragas-Schöll-Fiedler control.

    PubMed

    von Loewenich, Clemens; Benner, Hartmut; Just, Wolfram

    2010-09-01

    We present an experimental realization of time-delayed feedback control proposed by Schöll and Fiedler. The scheme enables us to stabilize torsion-free periodic orbits in autonomous systems, and to overcome the so-called odd number limitation. The experimental control performance is in quantitative agreement with the bifurcation analysis of simple model systems. The results uncover some general features of the control scheme which are deemed to be relevant for a large class of setups.
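
    The underlying control law is the time-delayed feedback u(t) = K[y(t - τ) - y(t)], which vanishes on any τ-periodic orbit. The sketch below applies it to the Rössler system with indicative (untuned) parameters; the Schöll-Fiedler extension that defeats the odd-number limitation adds an unstable controller mode not shown here.

    ```python
    import numpy as np

    # Pyragas-type time-delayed feedback u(t) = K * (y(t - tau) - y(t)) applied to
    # the Roessler system; the force vanishes on any tau-periodic orbit.
    a, b, c = 0.2, 0.2, 5.7
    K, tau, dt = 0.2, 5.9, 1e-3          # indicative values, not tuned to the experiment
    steps, delay = 200_000, int(tau / dt)
    x, y, z = 1.0, 1.0, 0.0
    y_hist = np.zeros(delay)             # circular buffer of past y values
    u_log = []

    for k in range(steps):
        y_delayed = y_hist[k % delay]
        u = K * (y_delayed - y) if k >= delay else 0.0
        dx = -y - z
        dy = x + a * y + u
        dz = b + z * (x - c)
        y_hist[k % delay] = y
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        u_log.append(abs(u))

    # If a tau-periodic orbit is stabilized, the feedback force decays toward zero.
    print(f"mean |u| over last 10% of the run: {np.mean(u_log[-steps // 10:]):.4f}")
    ```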

  15. [Differentiation of species within the Mycobacterium tuberculosis complex by molecular techniques].

    PubMed

    Herrera-León, Laura; Pozuelo-Díaz, Rodolfo; Molina Moreno, Tamara; Valverde Cobacho, Azucena; Saiz Vega, Pilar; Jiménez Pajares, María Soledad

    2009-11-01

    The Mycobacterium tuberculosis complex includes the following species: Mycobacterium tuberculosis, Mycobacterium africanum, Mycobacterium bovis, Mycobacterium bovis-BCG, Mycobacterium microti, Mycobacterium caprae, Mycobacterium pinnipedii, and Mycobacterium canettii. These species cause tuberculosis in humans and animals. Identification of mycobacterial strains has classically been performed by phenotypic study. In recent years, laboratories have developed several molecular techniques to differentiate between these species. The aim of this study was to evaluate these methods and develop a simple, fast identification scheme. We analyzed 251 strains randomly selected from the strains studied in 2004, and 797 strains received by the Reference Laboratory between 2005 and 2007. Phenotypic characterization of the 4183 strains isolated during that period was done by studying colony morphology, characteristics in culture, nitrate reduction, niacin accumulation, and growth in the presence of thiophen-2-carboxylic acid hydrazide 10 microg/mL and pyrazinamide 50 microg/mL. The molecular identification scheme designed was as follows: 1) gyrB PCR-RFLP with RsaI, TaqI or SacII and hsp65 PCR-RFLP with HhaI; and 2) multiplex PCR to determine the presence/absence of the RD9 and RD1 regions. The results showed 100% agreement between the phenotypic study and the molecular scheme. This molecular identification scheme is a simple and fast method, with 100% sensitivity and specificity, that can be implemented in most clinical laboratories at low cost.
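
    Purely as a schematic illustration of the second step, the snippet below encodes a deliberately simplified reading of the RD9/RD1 multiplex-PCR result; the actual scheme also requires the gyrB and hsp65 PCR-RFLP patterns, and the species assignments here are indicative only.

    ```python
    def mtbc_call(rd9_present: bool, rd1_present: bool) -> str:
        """Schematic (and deliberately over-simplified) interpretation of the
        RD9/RD1 multiplex-PCR step; real identification also requires the
        gyrB and hsp65 PCR-RFLP patterns described in the abstract."""
        if rd9_present:
            return "consistent with M. tuberculosis (M. canettii excluded by other criteria)"
        if not rd1_present:
            return "RD1-deleted member, e.g. M. bovis BCG (confirm by gyrB/hsp65 RFLP)"
        return "other MTBC member (e.g. M. bovis, M. caprae, M. africanum) -> gyrB/hsp65 RFLP"

    for rd9, rd1 in [(True, True), (False, True), (False, False)]:
        print(f"RD9={rd9}, RD1={rd1} -> {mtbc_call(rd9, rd1)}")
    ```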

  16. Directionality compensation for linear multivariable anti-windup synthesis

    NASA Astrophysics Data System (ADS)

    Adegbege, Ambrose A.; Heath, William P.

    2015-11-01

    We develop new synthesis procedures for optimising anti-windup control applicable to open-loop exponentially stable multivariable plants subject to hard bounds on the inputs. The optimising anti-windup control falls into a class of compensators commonly termed directionality compensation. The computation of the control involves the online solution of a low-order quadratic programme in place of simple saturation. We exploit the structure of the quadratic programme to incorporate directionality information into the offline anti-windup synthesis, using a decoupled architecture similar to that proposed in the literature for anti-windup schemes with simple saturation. We demonstrate the effectiveness of the design compared with several existing schemes using a simulated example. Preliminary results of this work were published in the proceedings of the IEEE Conference on Decision and Control, Orlando, 2011 (Adegbege & Heath, 2011a).
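
    A minimal numerical sketch of the online step is given below: the demanded control is projected onto the input bounds by a small bound-constrained least-squares problem (a QP) with a weighting matrix, instead of elementwise saturation. The weighting and values are illustrative, not the synthesis of the paper.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def saturate(u, u_max):
        """Conventional elementwise saturation (distorts the input direction)."""
        return np.clip(u, -u_max, u_max)

    def directionality_qp(u_des, u_max, W):
        """Directionality compensation as a low-order QP:
        minimize (u - u_des)' W (u - u_des) subject to |u_i| <= u_max_i,
        solved as bound-constrained least squares with A such that A'A = W."""
        A = np.linalg.cholesky(W).T
        res = lsq_linear(A, A @ u_des, bounds=(-u_max, u_max))
        return res.x

    u_des = np.array([3.0, 0.5])             # demanded control, exceeds the bounds
    u_max = np.array([1.0, 1.0])
    W = np.array([[2.0, 1.5],                # illustrative weighting, e.g. reflecting
                  [1.5, 2.0]])               # the plant's input directions

    print("saturation :", saturate(u_des, u_max))        # direction of u_des is lost
    print("QP solution:", directionality_qp(u_des, u_max, W))  # bounds respected, weighted error minimized
    ```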

  17. A Mass-Flux Scheme View of a High-Resolution Simulation of a Transition from Shallow to Deep Cumulus Convection.

    NASA Astrophysics Data System (ADS)

    Kuang, Zhiming; Bretherton, Christopher S.

    2006-07-01

    In this paper, an idealized, high-resolution simulation of a gradually forced transition from shallow, nonprecipitating to deep, precipitating cumulus convection is described; how the cloud and transport statistics evolve as the convection deepens is explored; and the collected statistics are used to evaluate assumptions in current cumulus schemes. The statistical analysis methodologies that are used do not require tracing the history of individual clouds or air parcels; instead they rely on probing the ensemble characteristics of cumulus convection in the large model dataset. They appear to be an attractive way for analyzing outputs from cloud-resolving numerical experiments. Throughout the simulation, it is found that 1) the initial thermodynamic properties of the updrafts at the cloud base have rather tight distributions; 2) contrary to the assumption made in many cumulus schemes, nearly undiluted air parcels are too infrequent to be relevant to any stage of the simulated convection; and 3) a simple model with a spectrum of entraining plumes appears to reproduce most features of the cloudy updrafts, but significantly overpredicts the mass flux as the updrafts approach their levels of zero buoyancy. A buoyancy-sorting model was suggested as a potential remedy. The organized circulations of cold pools seem to create clouds with larger-sized bases and may correspondingly contribute to their smaller lateral entrainment rates. Our results do not support a mass-flux closure based solely on convective available potential energy (CAPE), and are in general agreement with a convective inhibition (CIN)-based closure. The general similarity in the ensemble characteristics of shallow and deep convection and the continuous evolution of the thermodynamic structure during the transition provide justification for developing a single unified cumulus parameterization that encompasses both shallow and deep convection.
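
    The entraining-plume idea referred to above can be sketched as a conserved updraft property relaxing toward the environment, dφ/dz = -ε(φ - φ_env), with one curve per entrainment rate in the spectrum; the profiles and rates below are illustrative, not taken from the simulation.

    ```python
    import numpy as np

    def entraining_plume(phi_cb, eps, z, phi_env):
        """Integrate d(phi)/dz = -eps * (phi - phi_env(z)) for a conserved
        updraft property phi (e.g. moist static energy), explicit Euler."""
        phi = np.empty_like(z)
        phi[0] = phi_cb
        for k in range(1, len(z)):
            dz = z[k] - z[k - 1]
            phi[k] = phi[k - 1] - eps * (phi[k - 1] - phi_env[k - 1]) * dz
        return phi

    z = np.linspace(0.0, 8000.0, 401)          # height above cloud base [m]
    phi_env = 340.0 - 0.002 * z                # illustrative environmental profile [kJ/kg]
    phi_cb = 345.0                             # tight cloud-base distribution -> single value

    # A spectrum of plumes: weakly entraining plumes stay buoyant high up,
    # strongly entraining ones dilute quickly toward the environment.
    for eps in (1e-4, 5e-4, 2e-3):             # fractional entrainment rate [1/m]
        phi = entraining_plume(phi_cb, eps, z, phi_env)
        print(f"eps = {eps:.0e} /m : phi at 4 km = {phi[200]:.1f}, at 8 km = {phi[-1]:.1f}")
    ```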

  18. A class of the van Leer-type transport schemes and its application to the moisture transport in a general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.

    1994-01-01

    A generalized form of the second-order van Leer transport scheme is derived. Several constraints on the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations or unphysical negative mixing ratios. Within the context of low-resolution climate simulations, these characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effects of the filling algorithm used in conjunction with the original scheme are also discussed in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
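
    A one-dimensional sketch of a van Leer-type step is given below: a monotonic (harmonic-mean) slope limiter combined with second-order upwind fluxes, compared against unlimited centered slopes on a sharp gradient. It is a toy advection test, not the GLA GCM implementation.

    ```python
    import numpy as np

    def van_leer_slope(q):
        """Van Leer's monotonic slope: harmonic mean of one-sided differences,
        set to zero at extrema (this is the local monotonic constraint)."""
        a = q - np.roll(q, 1)            # q_i - q_{i-1}
        b = np.roll(q, -1) - q           # q_{i+1} - q_i
        s = np.zeros_like(q)
        mask = a * b > 0.0
        s[mask] = 2.0 * a[mask] * b[mask] / (a[mask] + b[mask])
        return s

    def advect(q, courant, steps, limited=True):
        """Periodic 1-D advection, positive velocity, Courant number <= 1."""
        for _ in range(steps):
            dq = van_leer_slope(q) if limited else 0.5 * (np.roll(q, -1) - np.roll(q, 1))
            flux = q + 0.5 * (1.0 - courant) * dq      # interface value at i+1/2
            q = q - courant * (flux - np.roll(flux, 1))
        return q

    n = 100
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    q0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)     # sharp "moisture" gradient
    c = 0.5                                            # Courant number
    q_lim = advect(q0.copy(), c, steps=int(n / c))     # one full revolution
    q_cen = advect(q0.copy(), c, steps=int(n / c), limited=False)
    print(f"limited slopes  : min = {q_lim.min():.3f}, max = {q_lim.max():.3f}")  # stays in [0, 1]
    print(f"unlimited slopes: min = {q_cen.min():.3f}, max = {q_cen.max():.3f}")  # over/undershoots
    ```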

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinker, Alexander

    Here, we study control of the angular-velocity-actuated nonholonomic unicycle via a simple, bounded extremum seeking controller which is robust to external disturbances and measurement noise. The vehicle performs source seeking despite not having any position information about itself or the source, able only to sense a noise-corrupted scalar value whose extremum coincides with the unknown source location. In order to control the angular velocity, rather than the angular heading directly, a controller is developed such that the closed-loop system exhibits multiple time scales and requires an analysis approach expanding the previous work of Kurzweil, Jarnik, Sussmann, and Liu, utilizing weak limits. We provide an analytic proof of stability and demonstrate how this simple scheme can be extended to include position-independent source seeking, tracking, and collision avoidance for groups of autonomous vehicles in GPS-denied environments, based only on a measure of distance to an obstacle, which is an especially important feature for an autonomous agent.
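
    The scalar sketch below illustrates the bounded extremum-seeking mechanism such schemes rely on, dx/dt = sqrt(αω) cos(ωt + kC(x)), whose update rate is bounded by sqrt(αω) and which on average descends the gradient of the measured cost despite additive measurement noise; the angular-velocity-actuated unicycle version applies this kind of law to the heading-rate channel and is not reproduced here. Parameter values are illustrative only.

    ```python
    import numpy as np

    # Bounded extremum seeking on a scalar toy problem:
    #   dx/dt = sqrt(alpha*omega) * cos(omega*t + k*C_measured)
    # The update rate never exceeds sqrt(alpha*omega); on average the dynamics
    # descend the gradient of C even though only noisy values of C are sensed.
    alpha, omega, k = 1.0, 100.0, 2.0
    dt, T = 1e-4, 10.0
    rng = np.random.default_rng(0)

    def cost(x):
        return (x - 1.0) ** 2              # unknown map; minimum at x = 1

    x = -2.0
    for i in range(int(T / dt)):
        t = i * dt
        J = cost(x) + 0.05 * rng.standard_normal()     # noisy scalar measurement
        x += dt * np.sqrt(alpha * omega) * np.cos(omega * t + k * J)

    print(f"final x = {x:.2f} (true minimizer at 1.0)")
    ```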

  20. A simple-architecture fibered transmission system for dissemination of high stability 100 MHz signals

    NASA Astrophysics Data System (ADS)

    Bakir, A.; Rocher, C.; Maréchal, B.; Bigler, E.; Boudot, R.; Kersalé, Y.; Millo, J.

    2018-05-01

    We report on the development of a simple-architecture fiber-based frequency distribution system used to transfer high-frequency-stability 100 MHz signals. This work focuses on the emitter and receiver performance that allows transmission of the radio-frequency signal over an optical fiber. The system exhibits a residual fractional frequency stability of 1 × 10^-14 at 1 s integration time and in the low 10^-16 range after 100 s. This performance is suitable for transferring the signal of frequency references such as a state-of-the-art hydrogen maser without any phase-noise compensation scheme. As an application, we demonstrate the dissemination of such a signal through a 100 m long optical fiber without any degradation. The proposed setup could easily be extended to operating frequencies in the 10 MHz-1 GHz range.
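
    The quoted stability figures are fractional-frequency (Allan-deviation-type) numbers; the sketch below shows how an overlapping Allan deviation is computed from fractional-frequency samples, using synthetic white frequency noise rather than data from the described link.

    ```python
    import numpy as np

    def overlapping_adev(y, m):
        """Overlapping Allan deviation from fractional-frequency samples y taken
        every tau0 seconds; the averaging time is tau = m * tau0."""
        y = np.asarray(y, dtype=float)
        if len(y) < 2 * m + 1:
            raise ValueError("not enough samples for this averaging factor")
        d = y[m:] - y[:-m]                               # y_{i+m} - y_i
        c = np.cumsum(np.concatenate(([0.0], d)))
        s = c[m:] - c[:-m]                               # length-m moving sums of d
        return np.sqrt(np.mean(s ** 2) / (2.0 * m ** 2))

    # Synthetic white frequency noise at tau0 = 1 s; the ADEV falls roughly as tau^(-1/2)
    rng = np.random.default_rng(1)
    y = 1e-13 * rng.standard_normal(100_000)
    for m in (1, 10, 100, 1000):
        print(f"tau = {m:4d} s : sigma_y(tau) = {overlapping_adev(y, m):.2e}")
    ```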
